00:00:00.000 Started by upstream project "autotest-per-patch" build number 132149 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.153 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.154 The recommended git tool is: git 00:00:00.154 using credential 00000000-0000-0000-0000-000000000002 00:00:00.156 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.209 Fetching changes from the remote Git repository 00:00:00.212 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.264 Using shallow fetch with depth 1 00:00:00.264 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.264 > git --version # timeout=10 00:00:00.307 > git --version # 'git version 2.39.2' 00:00:00.307 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.325 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.325 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.232 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.247 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.261 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.261 > git config core.sparsecheckout # timeout=10 00:00:07.274 > git read-tree -mu HEAD # timeout=10 00:00:07.291 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.311 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.311 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.414 [Pipeline] Start of Pipeline 00:00:07.429 [Pipeline] library 00:00:07.430 Loading library shm_lib@master 00:00:07.431 Library shm_lib@master is cached. Copying from home. 00:00:07.445 [Pipeline] node 00:00:22.447 Still waiting to schedule task 00:00:22.447 Waiting for next available executor on ‘vagrant-vm-host’ 00:01:51.864 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:51.866 [Pipeline] { 00:01:51.874 [Pipeline] catchError 00:01:51.875 [Pipeline] { 00:01:51.886 [Pipeline] wrap 00:01:51.895 [Pipeline] { 00:01:51.904 [Pipeline] stage 00:01:51.907 [Pipeline] { (Prologue) 00:01:51.925 [Pipeline] echo 00:01:51.927 Node: VM-host-SM4 00:01:51.932 [Pipeline] cleanWs 00:01:51.979 [WS-CLEANUP] Deleting project workspace... 00:01:51.979 [WS-CLEANUP] Deferred wipeout is used... 
00:01:51.986 [WS-CLEANUP] done 00:01:52.180 [Pipeline] setCustomBuildProperty 00:01:52.286 [Pipeline] httpRequest 00:01:52.686 [Pipeline] echo 00:01:52.688 Sorcerer 10.211.164.101 is alive 00:01:52.698 [Pipeline] retry 00:01:52.701 [Pipeline] { 00:01:52.716 [Pipeline] httpRequest 00:01:52.721 HttpMethod: GET 00:01:52.722 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:01:52.723 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:01:52.724 Response Code: HTTP/1.1 200 OK 00:01:52.725 Success: Status code 200 is in the accepted range: 200,404 00:01:52.725 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:01:53.012 [Pipeline] } 00:01:53.032 [Pipeline] // retry 00:01:53.040 [Pipeline] sh 00:01:53.322 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:01:53.338 [Pipeline] httpRequest 00:01:53.735 [Pipeline] echo 00:01:53.737 Sorcerer 10.211.164.101 is alive 00:01:53.749 [Pipeline] retry 00:01:53.751 [Pipeline] { 00:01:53.767 [Pipeline] httpRequest 00:01:53.772 HttpMethod: GET 00:01:53.773 URL: http://10.211.164.101/packages/spdk_e729adafb528ec812886d8928664103ce83c27a6.tar.gz 00:01:53.773 Sending request to url: http://10.211.164.101/packages/spdk_e729adafb528ec812886d8928664103ce83c27a6.tar.gz 00:01:53.774 Response Code: HTTP/1.1 200 OK 00:01:53.775 Success: Status code 200 is in the accepted range: 200,404 00:01:53.775 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e729adafb528ec812886d8928664103ce83c27a6.tar.gz 00:01:56.599 [Pipeline] } 00:01:56.619 [Pipeline] // retry 00:01:56.628 [Pipeline] sh 00:01:56.983 + tar --no-same-owner -xf spdk_e729adafb528ec812886d8928664103ce83c27a6.tar.gz 00:02:00.305 [Pipeline] sh 00:02:00.586 + git -C spdk log --oneline -n5 00:02:00.586 e729adafb lib/reduce: Add a chunk data read/write cache 00:02:00.586 ed4d6bbb7 lib/reduce: Data copy logic in thin read operations 00:02:00.586 b264e22f0 accel/error: fix callback type for tasks in a sequence 00:02:00.586 0732c1430 accel/error: don't submit tasks intended to fail 00:02:00.586 b53b961c8 accel/error: move interval check to a function 00:02:00.606 [Pipeline] writeFile 00:02:00.621 [Pipeline] sh 00:02:00.904 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:00.916 [Pipeline] sh 00:02:01.202 + cat autorun-spdk.conf 00:02:01.202 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.202 SPDK_TEST_NVMF=1 00:02:01.202 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:01.202 SPDK_TEST_URING=1 00:02:01.202 SPDK_TEST_USDT=1 00:02:01.202 SPDK_RUN_UBSAN=1 00:02:01.202 NET_TYPE=virt 00:02:01.202 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.208 RUN_NIGHTLY=0 00:02:01.210 [Pipeline] } 00:02:01.224 [Pipeline] // stage 00:02:01.239 [Pipeline] stage 00:02:01.242 [Pipeline] { (Run VM) 00:02:01.255 [Pipeline] sh 00:02:01.538 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:01.538 + echo 'Start stage prepare_nvme.sh' 00:02:01.538 Start stage prepare_nvme.sh 00:02:01.538 + [[ -n 6 ]] 00:02:01.538 + disk_prefix=ex6 00:02:01.538 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:01.538 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:01.538 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:01.538 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.538 ++ SPDK_TEST_NVMF=1 00:02:01.538 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:01.538 
++ SPDK_TEST_URING=1 00:02:01.538 ++ SPDK_TEST_USDT=1 00:02:01.538 ++ SPDK_RUN_UBSAN=1 00:02:01.538 ++ NET_TYPE=virt 00:02:01.538 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.538 ++ RUN_NIGHTLY=0 00:02:01.538 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:01.538 + nvme_files=() 00:02:01.538 + declare -A nvme_files 00:02:01.538 + backend_dir=/var/lib/libvirt/images/backends 00:02:01.538 + nvme_files['nvme.img']=5G 00:02:01.538 + nvme_files['nvme-cmb.img']=5G 00:02:01.538 + nvme_files['nvme-multi0.img']=4G 00:02:01.538 + nvme_files['nvme-multi1.img']=4G 00:02:01.538 + nvme_files['nvme-multi2.img']=4G 00:02:01.538 + nvme_files['nvme-openstack.img']=8G 00:02:01.538 + nvme_files['nvme-zns.img']=5G 00:02:01.538 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:01.538 + (( SPDK_TEST_FTL == 1 )) 00:02:01.538 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:01.538 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:01.538 + for nvme in "${!nvme_files[@]}" 00:02:01.538 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:02:01.538 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:01.538 + for nvme in "${!nvme_files[@]}" 00:02:01.538 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:02:01.538 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.538 + for nvme in "${!nvme_files[@]}" 00:02:01.538 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:02:01.538 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:01.538 + for nvme in "${!nvme_files[@]}" 00:02:01.538 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:02:01.538 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.538 + for nvme in "${!nvme_files[@]}" 00:02:01.538 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:02:01.538 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:01.538 + for nvme in "${!nvme_files[@]}" 00:02:01.538 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:02:01.538 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:01.797 + for nvme in "${!nvme_files[@]}" 00:02:01.797 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:02:02.734 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:02.734 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:02:02.734 + echo 'End stage prepare_nvme.sh' 00:02:02.734 End stage prepare_nvme.sh 00:02:02.746 [Pipeline] sh 00:02:03.028 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:03.028 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b 
/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:02:03.028 00:02:03.028 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:03.028 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:03.028 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:03.028 HELP=0 00:02:03.028 DRY_RUN=0 00:02:03.028 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:02:03.028 NVME_DISKS_TYPE=nvme,nvme, 00:02:03.028 NVME_AUTO_CREATE=0 00:02:03.028 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:02:03.028 NVME_CMB=,, 00:02:03.028 NVME_PMR=,, 00:02:03.028 NVME_ZNS=,, 00:02:03.028 NVME_MS=,, 00:02:03.028 NVME_FDP=,, 00:02:03.028 SPDK_VAGRANT_DISTRO=fedora39 00:02:03.028 SPDK_VAGRANT_VMCPU=10 00:02:03.028 SPDK_VAGRANT_VMRAM=12288 00:02:03.028 SPDK_VAGRANT_PROVIDER=libvirt 00:02:03.028 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:03.028 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:03.028 SPDK_OPENSTACK_NETWORK=0 00:02:03.028 VAGRANT_PACKAGE_BOX=0 00:02:03.028 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:03.028 FORCE_DISTRO=true 00:02:03.028 VAGRANT_BOX_VERSION= 00:02:03.028 EXTRA_VAGRANTFILES= 00:02:03.028 NIC_MODEL=e1000 00:02:03.028 00:02:03.028 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:03.028 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:06.416 Bringing machine 'default' up with 'libvirt' provider... 00:02:07.352 ==> default: Creating image (snapshot of base box volume). 00:02:07.352 ==> default: Creating domain with the following settings... 
00:02:07.352 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731050964_8dca17a8ab41c4a2556b 00:02:07.352 ==> default: -- Domain type: kvm 00:02:07.352 ==> default: -- Cpus: 10 00:02:07.352 ==> default: -- Feature: acpi 00:02:07.352 ==> default: -- Feature: apic 00:02:07.352 ==> default: -- Feature: pae 00:02:07.352 ==> default: -- Memory: 12288M 00:02:07.352 ==> default: -- Memory Backing: hugepages: 00:02:07.352 ==> default: -- Management MAC: 00:02:07.352 ==> default: -- Loader: 00:02:07.352 ==> default: -- Nvram: 00:02:07.352 ==> default: -- Base box: spdk/fedora39 00:02:07.352 ==> default: -- Storage pool: default 00:02:07.352 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731050964_8dca17a8ab41c4a2556b.img (20G) 00:02:07.352 ==> default: -- Volume Cache: default 00:02:07.352 ==> default: -- Kernel: 00:02:07.352 ==> default: -- Initrd: 00:02:07.352 ==> default: -- Graphics Type: vnc 00:02:07.352 ==> default: -- Graphics Port: -1 00:02:07.352 ==> default: -- Graphics IP: 127.0.0.1 00:02:07.352 ==> default: -- Graphics Password: Not defined 00:02:07.352 ==> default: -- Video Type: cirrus 00:02:07.352 ==> default: -- Video VRAM: 9216 00:02:07.352 ==> default: -- Sound Type: 00:02:07.352 ==> default: -- Keymap: en-us 00:02:07.352 ==> default: -- TPM Path: 00:02:07.352 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:07.352 ==> default: -- Command line args: 00:02:07.352 ==> default: -> value=-device, 00:02:07.352 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:07.352 ==> default: -> value=-drive, 00:02:07.352 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:02:07.352 ==> default: -> value=-device, 00:02:07.352 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:07.352 ==> default: -> value=-device, 00:02:07.352 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:07.352 ==> default: -> value=-drive, 00:02:07.352 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:07.352 ==> default: -> value=-device, 00:02:07.352 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:07.352 ==> default: -> value=-drive, 00:02:07.352 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:07.352 ==> default: -> value=-device, 00:02:07.352 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:07.352 ==> default: -> value=-drive, 00:02:07.352 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:07.352 ==> default: -> value=-device, 00:02:07.352 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:07.610 ==> default: Creating shared folders metadata... 00:02:07.610 ==> default: Starting domain. 00:02:09.512 ==> default: Waiting for domain to get an IP address... 00:02:27.598 ==> default: Waiting for SSH to become available... 00:02:27.598 ==> default: Configuring and enabling network interfaces... 
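The "-> value=" entries printed above are the NVMe-related QEMU arguments the vagrant-libvirt provider passes one value per entry. Purely as a reading aid, here is a sketch of how those pairs compose into a single command line; the controller ids, serials, image paths and block sizes are copied verbatim from the entries above, the binary path is the SPDK_QEMU_EMULATOR shown earlier in this run, and everything else about the real domain (machine type, memory, NIC, boot disk) is omitted, so this is not the full invocation the CI launches:

# Sketch only (not taken from the log): joins the -device/-drive pairs above.
# Assumes the backend images exist at the paths shown; omits the rest of the
# libvirt domain configuration, so it would start QEMU with NVMe devices only.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096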
00:02:31.786 default: SSH address: 192.168.121.230:22 00:02:31.786 default: SSH username: vagrant 00:02:31.786 default: SSH auth method: private key 00:02:34.320 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:44.332 ==> default: Mounting SSHFS shared folder... 00:02:45.265 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:45.265 ==> default: Checking Mount.. 00:02:46.639 ==> default: Folder Successfully Mounted! 00:02:46.639 ==> default: Running provisioner: file... 00:02:47.589 default: ~/.gitconfig => .gitconfig 00:02:47.848 00:02:47.848 SUCCESS! 00:02:47.848 00:02:47.848 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:47.848 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:47.848 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:47.848 00:02:47.856 [Pipeline] } 00:02:47.871 [Pipeline] // stage 00:02:47.881 [Pipeline] dir 00:02:47.881 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:47.883 [Pipeline] { 00:02:47.896 [Pipeline] catchError 00:02:47.897 [Pipeline] { 00:02:47.910 [Pipeline] sh 00:02:48.190 + vagrant ssh-config --host vagrant 00:02:48.190 + sed -ne /^Host/,$p 00:02:48.190 + tee ssh_conf 00:02:52.376 Host vagrant 00:02:52.376 HostName 192.168.121.230 00:02:52.376 User vagrant 00:02:52.376 Port 22 00:02:52.376 UserKnownHostsFile /dev/null 00:02:52.376 StrictHostKeyChecking no 00:02:52.376 PasswordAuthentication no 00:02:52.376 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:52.376 IdentitiesOnly yes 00:02:52.376 LogLevel FATAL 00:02:52.376 ForwardAgent yes 00:02:52.376 ForwardX11 yes 00:02:52.376 00:02:52.391 [Pipeline] withEnv 00:02:52.393 [Pipeline] { 00:02:52.407 [Pipeline] sh 00:02:52.686 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:52.686 source /etc/os-release 00:02:52.686 [[ -e /image.version ]] && img=$(< /image.version) 00:02:52.686 # Minimal, systemd-like check. 00:02:52.686 if [[ -e /.dockerenv ]]; then 00:02:52.686 # Clear garbage from the node's name: 00:02:52.687 # agt-er_autotest_547-896 -> autotest_547-896 00:02:52.687 # $HOSTNAME is the actual container id 00:02:52.687 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:52.687 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:52.687 # We can assume this is a mount from a host where container is running, 00:02:52.687 # so fetch its hostname to easily identify the target swarm worker. 
00:02:52.687 container="$(< /etc/hostname) ($agent)" 00:02:52.687 else 00:02:52.687 # Fallback 00:02:52.687 container=$agent 00:02:52.687 fi 00:02:52.687 fi 00:02:52.687 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:52.687 00:02:52.958 [Pipeline] } 00:02:52.974 [Pipeline] // withEnv 00:02:52.983 [Pipeline] setCustomBuildProperty 00:02:52.998 [Pipeline] stage 00:02:53.000 [Pipeline] { (Tests) 00:02:53.017 [Pipeline] sh 00:02:53.307 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:53.585 [Pipeline] sh 00:02:53.877 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:54.152 [Pipeline] timeout 00:02:54.153 Timeout set to expire in 1 hr 0 min 00:02:54.155 [Pipeline] { 00:02:54.169 [Pipeline] sh 00:02:54.450 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:55.031 HEAD is now at e729adafb lib/reduce: Add a chunk data read/write cache 00:02:55.042 [Pipeline] sh 00:02:55.321 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:55.593 [Pipeline] sh 00:02:55.873 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:56.149 [Pipeline] sh 00:02:56.432 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:56.690 ++ readlink -f spdk_repo 00:02:56.690 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:56.690 + [[ -n /home/vagrant/spdk_repo ]] 00:02:56.690 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:56.690 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:56.690 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:56.690 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:56.690 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:56.690 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:56.690 + cd /home/vagrant/spdk_repo 00:02:56.690 + source /etc/os-release 00:02:56.690 ++ NAME='Fedora Linux' 00:02:56.690 ++ VERSION='39 (Cloud Edition)' 00:02:56.690 ++ ID=fedora 00:02:56.690 ++ VERSION_ID=39 00:02:56.690 ++ VERSION_CODENAME= 00:02:56.690 ++ PLATFORM_ID=platform:f39 00:02:56.690 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:56.690 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:56.690 ++ LOGO=fedora-logo-icon 00:02:56.690 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:56.690 ++ HOME_URL=https://fedoraproject.org/ 00:02:56.690 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:56.690 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:56.690 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:56.690 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:56.690 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:56.690 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:56.690 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:56.690 ++ SUPPORT_END=2024-11-12 00:02:56.690 ++ VARIANT='Cloud Edition' 00:02:56.690 ++ VARIANT_ID=cloud 00:02:56.690 + uname -a 00:02:56.690 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:56.690 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:57.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:57.259 Hugepages 00:02:57.259 node hugesize free / total 00:02:57.259 node0 1048576kB 0 / 0 00:02:57.259 node0 2048kB 0 / 0 00:02:57.259 00:02:57.259 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.259 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:57.259 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:57.259 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:57.259 + rm -f /tmp/spdk-ld-path 00:02:57.259 + source autorun-spdk.conf 00:02:57.259 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:57.259 ++ SPDK_TEST_NVMF=1 00:02:57.259 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:57.259 ++ SPDK_TEST_URING=1 00:02:57.259 ++ SPDK_TEST_USDT=1 00:02:57.259 ++ SPDK_RUN_UBSAN=1 00:02:57.259 ++ NET_TYPE=virt 00:02:57.259 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:57.259 ++ RUN_NIGHTLY=0 00:02:57.259 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:57.259 + [[ -n '' ]] 00:02:57.259 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:57.259 + for M in /var/spdk/build-*-manifest.txt 00:02:57.259 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:57.259 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:57.259 + for M in /var/spdk/build-*-manifest.txt 00:02:57.259 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:57.259 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:57.259 + for M in /var/spdk/build-*-manifest.txt 00:02:57.259 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:57.259 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:57.259 ++ uname 00:02:57.259 + [[ Linux == \L\i\n\u\x ]] 00:02:57.259 + sudo dmesg -T 00:02:57.518 + sudo dmesg --clear 00:02:57.518 + dmesg_pid=5262 00:02:57.518 + sudo dmesg -Tw 00:02:57.518 + [[ Fedora Linux == FreeBSD ]] 00:02:57.518 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:57.518 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:57.518 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:57.518 + [[ -x /usr/src/fio-static/fio ]] 00:02:57.518 + export FIO_BIN=/usr/src/fio-static/fio 00:02:57.518 + FIO_BIN=/usr/src/fio-static/fio 00:02:57.518 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:57.518 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:57.518 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:57.518 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:57.518 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:57.518 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:57.518 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:57.518 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:57.518 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:57.518 07:30:15 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:57.518 07:30:15 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:57.518 07:30:15 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:57.518 07:30:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:57.518 07:30:15 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:57.518 07:30:15 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:57.518 07:30:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:57.518 07:30:15 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:57.518 07:30:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:57.518 07:30:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:57.518 07:30:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:57.518 07:30:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.519 07:30:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.519 07:30:15 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.519 07:30:15 -- paths/export.sh@5 -- $ export PATH 00:02:57.519 07:30:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:57.519 07:30:15 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:57.519 07:30:15 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:57.519 07:30:15 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731051015.XXXXXX 00:02:57.519 07:30:15 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731051015.yANBB3 00:02:57.519 07:30:15 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:57.519 07:30:15 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:57.519 07:30:15 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:57.519 07:30:15 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:57.519 07:30:15 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:57.519 07:30:15 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:57.519 07:30:15 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:57.519 07:30:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.519 07:30:15 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:57.519 07:30:15 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:57.519 07:30:15 -- pm/common@17 -- $ local monitor 00:02:57.519 07:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.519 07:30:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.519 07:30:15 -- pm/common@25 -- $ sleep 1 00:02:57.519 07:30:15 -- pm/common@21 -- $ date +%s 00:02:57.519 07:30:15 -- pm/common@21 -- $ date +%s 00:02:57.519 07:30:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731051015 00:02:57.519 07:30:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731051015 00:02:57.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731051015_collect-vmstat.pm.log 00:02:57.778 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731051015_collect-cpu-load.pm.log 00:02:58.720 07:30:16 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:58.720 07:30:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:58.720 07:30:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:58.720 07:30:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:58.720 07:30:16 -- spdk/autobuild.sh@16 -- $ date -u 00:02:58.720 Fri Nov 8 07:30:16 AM UTC 2024 00:02:58.720 07:30:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:58.720 v25.01-pre-177-ge729adafb 00:02:58.720 07:30:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:58.720 07:30:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:58.720 07:30:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:58.720 07:30:16 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:58.720 07:30:16 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:58.720 07:30:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.720 ************************************ 00:02:58.720 START TEST ubsan 00:02:58.720 ************************************ 00:02:58.720 using ubsan 00:02:58.720 07:30:16 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:58.720 00:02:58.720 real 0m0.000s 00:02:58.720 user 0m0.000s 00:02:58.720 sys 0m0.000s 00:02:58.720 07:30:16 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:58.720 07:30:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:58.720 ************************************ 00:02:58.720 END TEST ubsan 00:02:58.720 ************************************ 00:02:58.720 07:30:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:58.720 07:30:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:58.720 07:30:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:58.720 07:30:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:58.720 07:30:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:58.720 07:30:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:58.720 07:30:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:58.720 07:30:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:58.720 07:30:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:58.720 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:58.720 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:59.288 Using 'verbs' RDMA provider 00:03:15.558 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:30.452 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:30.452 Creating mk/config.mk...done. 00:03:30.452 Creating mk/cc.flags.mk...done. 00:03:30.452 Type 'make' to build. 
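The configure invocation above (autobuild.sh@67) together with the make step that follows is the whole build recipe for this job. As a minimal sketch of replaying it by hand inside the VM, assuming the same checkout at /home/vagrant/spdk_repo/spdk and the flags recorded in config_params earlier in the log (the CI runs these as separate autobuild steps, not as one script):

#!/usr/bin/env bash
# Sketch only: replays the configure + make steps recorded in this log.
# Paths and flags are copied from the log; nproc is used here in place of
# the hard-coded -j10 the CI passes to run_test make.
set -euo pipefail

cd /home/vagrant/spdk_repo/spdk

./configure \
    --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared

make -j"$(nproc)"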
00:03:30.452 07:30:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:30.452 07:30:47 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:30.452 07:30:47 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:30.452 07:30:47 -- common/autotest_common.sh@10 -- $ set +x 00:03:30.452 ************************************ 00:03:30.452 START TEST make 00:03:30.452 ************************************ 00:03:30.452 07:30:47 make -- common/autotest_common.sh@1127 -- $ make -j10 00:03:30.452 make[1]: Nothing to be done for 'all'. 00:03:40.429 The Meson build system 00:03:40.429 Version: 1.5.0 00:03:40.429 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:40.429 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:40.429 Build type: native build 00:03:40.429 Program cat found: YES (/usr/bin/cat) 00:03:40.429 Project name: DPDK 00:03:40.429 Project version: 24.03.0 00:03:40.429 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:40.429 C linker for the host machine: cc ld.bfd 2.40-14 00:03:40.429 Host machine cpu family: x86_64 00:03:40.429 Host machine cpu: x86_64 00:03:40.429 Message: ## Building in Developer Mode ## 00:03:40.429 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:40.429 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:40.429 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:40.429 Program python3 found: YES (/usr/bin/python3) 00:03:40.429 Program cat found: YES (/usr/bin/cat) 00:03:40.429 Compiler for C supports arguments -march=native: YES 00:03:40.429 Checking for size of "void *" : 8 00:03:40.429 Checking for size of "void *" : 8 (cached) 00:03:40.429 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:40.429 Library m found: YES 00:03:40.429 Library numa found: YES 00:03:40.429 Has header "numaif.h" : YES 00:03:40.429 Library fdt found: NO 00:03:40.429 Library execinfo found: NO 00:03:40.429 Has header "execinfo.h" : YES 00:03:40.429 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:40.429 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:40.429 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:40.429 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:40.429 Run-time dependency openssl found: YES 3.1.1 00:03:40.429 Run-time dependency libpcap found: YES 1.10.4 00:03:40.429 Has header "pcap.h" with dependency libpcap: YES 00:03:40.429 Compiler for C supports arguments -Wcast-qual: YES 00:03:40.429 Compiler for C supports arguments -Wdeprecated: YES 00:03:40.429 Compiler for C supports arguments -Wformat: YES 00:03:40.429 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:40.429 Compiler for C supports arguments -Wformat-security: NO 00:03:40.429 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:40.429 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:40.429 Compiler for C supports arguments -Wnested-externs: YES 00:03:40.429 Compiler for C supports arguments -Wold-style-definition: YES 00:03:40.429 Compiler for C supports arguments -Wpointer-arith: YES 00:03:40.429 Compiler for C supports arguments -Wsign-compare: YES 00:03:40.429 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:40.429 Compiler for C supports arguments -Wundef: YES 00:03:40.429 Compiler for C supports arguments -Wwrite-strings: YES 00:03:40.429 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:03:40.429 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:40.429 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:40.429 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:40.429 Program objdump found: YES (/usr/bin/objdump) 00:03:40.430 Compiler for C supports arguments -mavx512f: YES 00:03:40.430 Checking if "AVX512 checking" compiles: YES 00:03:40.430 Fetching value of define "__SSE4_2__" : 1 00:03:40.430 Fetching value of define "__AES__" : 1 00:03:40.430 Fetching value of define "__AVX__" : 1 00:03:40.430 Fetching value of define "__AVX2__" : 1 00:03:40.430 Fetching value of define "__AVX512BW__" : 1 00:03:40.430 Fetching value of define "__AVX512CD__" : 1 00:03:40.430 Fetching value of define "__AVX512DQ__" : 1 00:03:40.430 Fetching value of define "__AVX512F__" : 1 00:03:40.430 Fetching value of define "__AVX512VL__" : 1 00:03:40.430 Fetching value of define "__PCLMUL__" : 1 00:03:40.430 Fetching value of define "__RDRND__" : 1 00:03:40.430 Fetching value of define "__RDSEED__" : 1 00:03:40.430 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:40.430 Fetching value of define "__znver1__" : (undefined) 00:03:40.430 Fetching value of define "__znver2__" : (undefined) 00:03:40.430 Fetching value of define "__znver3__" : (undefined) 00:03:40.430 Fetching value of define "__znver4__" : (undefined) 00:03:40.430 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:40.430 Message: lib/log: Defining dependency "log" 00:03:40.430 Message: lib/kvargs: Defining dependency "kvargs" 00:03:40.430 Message: lib/telemetry: Defining dependency "telemetry" 00:03:40.430 Checking for function "getentropy" : NO 00:03:40.430 Message: lib/eal: Defining dependency "eal" 00:03:40.430 Message: lib/ring: Defining dependency "ring" 00:03:40.430 Message: lib/rcu: Defining dependency "rcu" 00:03:40.430 Message: lib/mempool: Defining dependency "mempool" 00:03:40.430 Message: lib/mbuf: Defining dependency "mbuf" 00:03:40.430 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:40.430 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:40.430 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:40.430 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:40.430 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:40.430 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:40.430 Compiler for C supports arguments -mpclmul: YES 00:03:40.430 Compiler for C supports arguments -maes: YES 00:03:40.430 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:40.430 Compiler for C supports arguments -mavx512bw: YES 00:03:40.430 Compiler for C supports arguments -mavx512dq: YES 00:03:40.430 Compiler for C supports arguments -mavx512vl: YES 00:03:40.430 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:40.430 Compiler for C supports arguments -mavx2: YES 00:03:40.430 Compiler for C supports arguments -mavx: YES 00:03:40.430 Message: lib/net: Defining dependency "net" 00:03:40.430 Message: lib/meter: Defining dependency "meter" 00:03:40.430 Message: lib/ethdev: Defining dependency "ethdev" 00:03:40.430 Message: lib/pci: Defining dependency "pci" 00:03:40.430 Message: lib/cmdline: Defining dependency "cmdline" 00:03:40.430 Message: lib/hash: Defining dependency "hash" 00:03:40.430 Message: lib/timer: Defining dependency "timer" 00:03:40.430 Message: lib/compressdev: Defining dependency "compressdev" 00:03:40.430 Message: 
lib/cryptodev: Defining dependency "cryptodev" 00:03:40.430 Message: lib/dmadev: Defining dependency "dmadev" 00:03:40.430 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:40.430 Message: lib/power: Defining dependency "power" 00:03:40.430 Message: lib/reorder: Defining dependency "reorder" 00:03:40.430 Message: lib/security: Defining dependency "security" 00:03:40.430 Has header "linux/userfaultfd.h" : YES 00:03:40.430 Has header "linux/vduse.h" : YES 00:03:40.430 Message: lib/vhost: Defining dependency "vhost" 00:03:40.430 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:40.430 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:40.430 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:40.430 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:40.430 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:40.430 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:40.430 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:40.430 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:40.430 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:40.430 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:40.430 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:40.430 Configuring doxy-api-html.conf using configuration 00:03:40.430 Configuring doxy-api-man.conf using configuration 00:03:40.430 Program mandb found: YES (/usr/bin/mandb) 00:03:40.430 Program sphinx-build found: NO 00:03:40.430 Configuring rte_build_config.h using configuration 00:03:40.430 Message: 00:03:40.430 ================= 00:03:40.430 Applications Enabled 00:03:40.430 ================= 00:03:40.430 00:03:40.430 apps: 00:03:40.430 00:03:40.430 00:03:40.430 Message: 00:03:40.430 ================= 00:03:40.430 Libraries Enabled 00:03:40.430 ================= 00:03:40.430 00:03:40.430 libs: 00:03:40.430 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:40.430 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:40.430 cryptodev, dmadev, power, reorder, security, vhost, 00:03:40.430 00:03:40.430 Message: 00:03:40.430 =============== 00:03:40.430 Drivers Enabled 00:03:40.430 =============== 00:03:40.430 00:03:40.430 common: 00:03:40.430 00:03:40.430 bus: 00:03:40.430 pci, vdev, 00:03:40.430 mempool: 00:03:40.430 ring, 00:03:40.430 dma: 00:03:40.430 00:03:40.430 net: 00:03:40.430 00:03:40.430 crypto: 00:03:40.430 00:03:40.430 compress: 00:03:40.430 00:03:40.430 vdpa: 00:03:40.430 00:03:40.430 00:03:40.430 Message: 00:03:40.430 ================= 00:03:40.430 Content Skipped 00:03:40.430 ================= 00:03:40.430 00:03:40.430 apps: 00:03:40.430 dumpcap: explicitly disabled via build config 00:03:40.430 graph: explicitly disabled via build config 00:03:40.430 pdump: explicitly disabled via build config 00:03:40.430 proc-info: explicitly disabled via build config 00:03:40.430 test-acl: explicitly disabled via build config 00:03:40.430 test-bbdev: explicitly disabled via build config 00:03:40.430 test-cmdline: explicitly disabled via build config 00:03:40.430 test-compress-perf: explicitly disabled via build config 00:03:40.430 test-crypto-perf: explicitly disabled via build config 00:03:40.430 test-dma-perf: explicitly disabled via build config 00:03:40.430 test-eventdev: explicitly disabled via build config 00:03:40.430 test-fib: explicitly disabled via build config 
00:03:40.430 test-flow-perf: explicitly disabled via build config 00:03:40.430 test-gpudev: explicitly disabled via build config 00:03:40.430 test-mldev: explicitly disabled via build config 00:03:40.430 test-pipeline: explicitly disabled via build config 00:03:40.430 test-pmd: explicitly disabled via build config 00:03:40.430 test-regex: explicitly disabled via build config 00:03:40.430 test-sad: explicitly disabled via build config 00:03:40.430 test-security-perf: explicitly disabled via build config 00:03:40.430 00:03:40.430 libs: 00:03:40.430 argparse: explicitly disabled via build config 00:03:40.430 metrics: explicitly disabled via build config 00:03:40.430 acl: explicitly disabled via build config 00:03:40.430 bbdev: explicitly disabled via build config 00:03:40.430 bitratestats: explicitly disabled via build config 00:03:40.430 bpf: explicitly disabled via build config 00:03:40.430 cfgfile: explicitly disabled via build config 00:03:40.430 distributor: explicitly disabled via build config 00:03:40.430 efd: explicitly disabled via build config 00:03:40.430 eventdev: explicitly disabled via build config 00:03:40.430 dispatcher: explicitly disabled via build config 00:03:40.430 gpudev: explicitly disabled via build config 00:03:40.430 gro: explicitly disabled via build config 00:03:40.430 gso: explicitly disabled via build config 00:03:40.430 ip_frag: explicitly disabled via build config 00:03:40.430 jobstats: explicitly disabled via build config 00:03:40.430 latencystats: explicitly disabled via build config 00:03:40.430 lpm: explicitly disabled via build config 00:03:40.430 member: explicitly disabled via build config 00:03:40.430 pcapng: explicitly disabled via build config 00:03:40.430 rawdev: explicitly disabled via build config 00:03:40.430 regexdev: explicitly disabled via build config 00:03:40.430 mldev: explicitly disabled via build config 00:03:40.430 rib: explicitly disabled via build config 00:03:40.430 sched: explicitly disabled via build config 00:03:40.430 stack: explicitly disabled via build config 00:03:40.430 ipsec: explicitly disabled via build config 00:03:40.430 pdcp: explicitly disabled via build config 00:03:40.430 fib: explicitly disabled via build config 00:03:40.430 port: explicitly disabled via build config 00:03:40.430 pdump: explicitly disabled via build config 00:03:40.430 table: explicitly disabled via build config 00:03:40.430 pipeline: explicitly disabled via build config 00:03:40.430 graph: explicitly disabled via build config 00:03:40.430 node: explicitly disabled via build config 00:03:40.430 00:03:40.430 drivers: 00:03:40.430 common/cpt: not in enabled drivers build config 00:03:40.430 common/dpaax: not in enabled drivers build config 00:03:40.430 common/iavf: not in enabled drivers build config 00:03:40.430 common/idpf: not in enabled drivers build config 00:03:40.430 common/ionic: not in enabled drivers build config 00:03:40.430 common/mvep: not in enabled drivers build config 00:03:40.430 common/octeontx: not in enabled drivers build config 00:03:40.430 bus/auxiliary: not in enabled drivers build config 00:03:40.430 bus/cdx: not in enabled drivers build config 00:03:40.430 bus/dpaa: not in enabled drivers build config 00:03:40.430 bus/fslmc: not in enabled drivers build config 00:03:40.430 bus/ifpga: not in enabled drivers build config 00:03:40.430 bus/platform: not in enabled drivers build config 00:03:40.430 bus/uacce: not in enabled drivers build config 00:03:40.430 bus/vmbus: not in enabled drivers build config 00:03:40.430 common/cnxk: not 
in enabled drivers build config 00:03:40.430 common/mlx5: not in enabled drivers build config 00:03:40.430 common/nfp: not in enabled drivers build config 00:03:40.430 common/nitrox: not in enabled drivers build config 00:03:40.431 common/qat: not in enabled drivers build config 00:03:40.431 common/sfc_efx: not in enabled drivers build config 00:03:40.431 mempool/bucket: not in enabled drivers build config 00:03:40.431 mempool/cnxk: not in enabled drivers build config 00:03:40.431 mempool/dpaa: not in enabled drivers build config 00:03:40.431 mempool/dpaa2: not in enabled drivers build config 00:03:40.431 mempool/octeontx: not in enabled drivers build config 00:03:40.431 mempool/stack: not in enabled drivers build config 00:03:40.431 dma/cnxk: not in enabled drivers build config 00:03:40.431 dma/dpaa: not in enabled drivers build config 00:03:40.431 dma/dpaa2: not in enabled drivers build config 00:03:40.431 dma/hisilicon: not in enabled drivers build config 00:03:40.431 dma/idxd: not in enabled drivers build config 00:03:40.431 dma/ioat: not in enabled drivers build config 00:03:40.431 dma/skeleton: not in enabled drivers build config 00:03:40.431 net/af_packet: not in enabled drivers build config 00:03:40.431 net/af_xdp: not in enabled drivers build config 00:03:40.431 net/ark: not in enabled drivers build config 00:03:40.431 net/atlantic: not in enabled drivers build config 00:03:40.431 net/avp: not in enabled drivers build config 00:03:40.431 net/axgbe: not in enabled drivers build config 00:03:40.431 net/bnx2x: not in enabled drivers build config 00:03:40.431 net/bnxt: not in enabled drivers build config 00:03:40.431 net/bonding: not in enabled drivers build config 00:03:40.431 net/cnxk: not in enabled drivers build config 00:03:40.431 net/cpfl: not in enabled drivers build config 00:03:40.431 net/cxgbe: not in enabled drivers build config 00:03:40.431 net/dpaa: not in enabled drivers build config 00:03:40.431 net/dpaa2: not in enabled drivers build config 00:03:40.431 net/e1000: not in enabled drivers build config 00:03:40.431 net/ena: not in enabled drivers build config 00:03:40.431 net/enetc: not in enabled drivers build config 00:03:40.431 net/enetfec: not in enabled drivers build config 00:03:40.431 net/enic: not in enabled drivers build config 00:03:40.431 net/failsafe: not in enabled drivers build config 00:03:40.431 net/fm10k: not in enabled drivers build config 00:03:40.431 net/gve: not in enabled drivers build config 00:03:40.431 net/hinic: not in enabled drivers build config 00:03:40.431 net/hns3: not in enabled drivers build config 00:03:40.431 net/i40e: not in enabled drivers build config 00:03:40.431 net/iavf: not in enabled drivers build config 00:03:40.431 net/ice: not in enabled drivers build config 00:03:40.431 net/idpf: not in enabled drivers build config 00:03:40.431 net/igc: not in enabled drivers build config 00:03:40.431 net/ionic: not in enabled drivers build config 00:03:40.431 net/ipn3ke: not in enabled drivers build config 00:03:40.431 net/ixgbe: not in enabled drivers build config 00:03:40.431 net/mana: not in enabled drivers build config 00:03:40.431 net/memif: not in enabled drivers build config 00:03:40.431 net/mlx4: not in enabled drivers build config 00:03:40.431 net/mlx5: not in enabled drivers build config 00:03:40.431 net/mvneta: not in enabled drivers build config 00:03:40.431 net/mvpp2: not in enabled drivers build config 00:03:40.431 net/netvsc: not in enabled drivers build config 00:03:40.431 net/nfb: not in enabled drivers build config 
00:03:40.431 net/nfp: not in enabled drivers build config 00:03:40.431 net/ngbe: not in enabled drivers build config 00:03:40.431 net/null: not in enabled drivers build config 00:03:40.431 net/octeontx: not in enabled drivers build config 00:03:40.431 net/octeon_ep: not in enabled drivers build config 00:03:40.431 net/pcap: not in enabled drivers build config 00:03:40.431 net/pfe: not in enabled drivers build config 00:03:40.431 net/qede: not in enabled drivers build config 00:03:40.431 net/ring: not in enabled drivers build config 00:03:40.431 net/sfc: not in enabled drivers build config 00:03:40.431 net/softnic: not in enabled drivers build config 00:03:40.431 net/tap: not in enabled drivers build config 00:03:40.431 net/thunderx: not in enabled drivers build config 00:03:40.431 net/txgbe: not in enabled drivers build config 00:03:40.431 net/vdev_netvsc: not in enabled drivers build config 00:03:40.431 net/vhost: not in enabled drivers build config 00:03:40.431 net/virtio: not in enabled drivers build config 00:03:40.431 net/vmxnet3: not in enabled drivers build config 00:03:40.431 raw/*: missing internal dependency, "rawdev" 00:03:40.431 crypto/armv8: not in enabled drivers build config 00:03:40.431 crypto/bcmfs: not in enabled drivers build config 00:03:40.431 crypto/caam_jr: not in enabled drivers build config 00:03:40.431 crypto/ccp: not in enabled drivers build config 00:03:40.431 crypto/cnxk: not in enabled drivers build config 00:03:40.431 crypto/dpaa_sec: not in enabled drivers build config 00:03:40.431 crypto/dpaa2_sec: not in enabled drivers build config 00:03:40.431 crypto/ipsec_mb: not in enabled drivers build config 00:03:40.431 crypto/mlx5: not in enabled drivers build config 00:03:40.431 crypto/mvsam: not in enabled drivers build config 00:03:40.431 crypto/nitrox: not in enabled drivers build config 00:03:40.431 crypto/null: not in enabled drivers build config 00:03:40.431 crypto/octeontx: not in enabled drivers build config 00:03:40.431 crypto/openssl: not in enabled drivers build config 00:03:40.431 crypto/scheduler: not in enabled drivers build config 00:03:40.431 crypto/uadk: not in enabled drivers build config 00:03:40.431 crypto/virtio: not in enabled drivers build config 00:03:40.431 compress/isal: not in enabled drivers build config 00:03:40.431 compress/mlx5: not in enabled drivers build config 00:03:40.431 compress/nitrox: not in enabled drivers build config 00:03:40.431 compress/octeontx: not in enabled drivers build config 00:03:40.431 compress/zlib: not in enabled drivers build config 00:03:40.431 regex/*: missing internal dependency, "regexdev" 00:03:40.431 ml/*: missing internal dependency, "mldev" 00:03:40.431 vdpa/ifc: not in enabled drivers build config 00:03:40.431 vdpa/mlx5: not in enabled drivers build config 00:03:40.431 vdpa/nfp: not in enabled drivers build config 00:03:40.431 vdpa/sfc: not in enabled drivers build config 00:03:40.431 event/*: missing internal dependency, "eventdev" 00:03:40.431 baseband/*: missing internal dependency, "bbdev" 00:03:40.431 gpu/*: missing internal dependency, "gpudev" 00:03:40.431 00:03:40.431 00:03:40.690 Build targets in project: 85 00:03:40.690 00:03:40.690 DPDK 24.03.0 00:03:40.690 00:03:40.690 User defined options 00:03:40.690 buildtype : debug 00:03:40.690 default_library : shared 00:03:40.690 libdir : lib 00:03:40.690 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:40.690 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:40.690 c_link_args : 
00:03:40.690 cpu_instruction_set: native 00:03:40.690 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:40.690 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:40.690 enable_docs : false 00:03:40.690 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:40.690 enable_kmods : false 00:03:40.690 max_lcores : 128 00:03:40.690 tests : false 00:03:40.690 00:03:40.690 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:41.258 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:41.259 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:41.259 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:41.259 [3/268] Linking static target lib/librte_kvargs.a 00:03:41.259 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:41.259 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:41.259 [6/268] Linking static target lib/librte_log.a 00:03:41.518 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:41.776 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:41.776 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:41.776 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:41.776 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:41.776 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:41.776 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.776 [14/268] Linking static target lib/librte_telemetry.a 00:03:41.776 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:41.776 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:41.776 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:42.034 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:42.292 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:42.292 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:42.292 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.292 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:42.551 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:42.551 [24/268] Linking target lib/librte_log.so.24.1 00:03:42.551 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:42.551 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:42.551 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:42.551 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:42.551 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:42.809 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:42.809 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.809 [32/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:42.809 [33/268] Linking target lib/librte_kvargs.so.24.1 00:03:42.809 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:43.067 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:43.068 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:43.068 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:43.068 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:43.068 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:43.327 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:43.327 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:43.327 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:43.327 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:43.327 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:43.327 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:43.327 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:43.327 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:43.327 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:43.327 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:43.586 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:43.845 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:43.845 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:43.845 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:43.845 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:43.845 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:43.845 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:44.103 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:44.104 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:44.104 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:44.104 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:44.104 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:44.363 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:44.363 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:44.363 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:44.363 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:44.622 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:44.622 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:44.622 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:44.881 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 
00:03:44.881 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:44.881 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:44.881 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:44.881 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:44.881 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:44.881 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:44.881 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:45.140 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:45.140 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:45.140 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:45.399 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:45.399 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:45.399 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:45.399 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:45.399 [84/268] Linking static target lib/librte_ring.a 00:03:45.399 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:45.658 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:45.658 [87/268] Linking static target lib/librte_eal.a 00:03:45.658 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:45.658 [89/268] Linking static target lib/librte_rcu.a 00:03:45.658 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:45.658 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:45.916 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:45.916 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:45.916 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:45.916 [95/268] Linking static target lib/librte_mempool.a 00:03:45.916 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.174 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:46.174 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:46.174 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:46.174 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:46.174 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.431 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:46.431 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:46.431 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:46.431 [105/268] Linking static target lib/librte_mbuf.a 00:03:46.431 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:46.431 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:46.431 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:46.431 [109/268] Linking static target lib/librte_meter.a 00:03:46.688 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:46.688 [111/268] Linking static target lib/librte_net.a 00:03:46.688 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:46.945 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:46.945 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:46.945 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.945 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:47.203 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.203 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.203 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:47.460 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:47.460 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:47.460 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.718 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:47.718 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:47.718 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:47.718 [126/268] Linking static target lib/librte_pci.a 00:03:47.977 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:47.977 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:47.977 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:47.977 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:47.977 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:47.977 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:48.235 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:48.236 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:48.236 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:48.236 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:48.236 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.236 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:48.236 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:48.236 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:48.236 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:48.236 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:48.236 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:48.236 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:48.236 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:48.494 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:48.494 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:48.494 [148/268] Linking static target lib/librte_ethdev.a 00:03:48.494 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:48.494 [150/268] Linking static target lib/librte_cmdline.a 00:03:48.494 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:48.752 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:48.752 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:48.752 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:48.752 [155/268] Linking static target lib/librte_timer.a 00:03:49.010 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:49.010 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:49.010 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:49.010 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:49.010 [160/268] Linking static target lib/librte_hash.a 00:03:49.010 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:49.010 [162/268] Linking static target lib/librte_compressdev.a 00:03:49.268 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:49.268 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:49.525 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:49.525 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.525 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:49.525 [168/268] Linking static target lib/librte_dmadev.a 00:03:49.525 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:49.784 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:49.784 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:49.784 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:50.041 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:50.041 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:50.041 [175/268] Linking static target lib/librte_cryptodev.a 00:03:50.041 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.299 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.299 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:50.299 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:50.299 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:50.299 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.299 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:50.558 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.558 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:50.558 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:50.559 [186/268] Linking static target lib/librte_power.a 00:03:50.817 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:50.817 [188/268] Linking static target lib/librte_reorder.a 00:03:50.817 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:51.076 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:51.076 [191/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:03:51.076 [192/268] Linking static target lib/librte_security.a 00:03:51.076 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:51.335 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:51.335 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.902 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:51.902 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:51.902 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.902 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:51.902 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:51.902 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.161 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:52.420 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:52.420 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:52.420 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:52.420 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:52.420 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:52.420 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:52.420 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:52.678 [210/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.678 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:52.678 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:52.678 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:52.678 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:52.937 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:52.937 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:52.937 [217/268] Linking static target drivers/librte_bus_pci.a 00:03:52.937 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:52.938 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:52.938 [220/268] Linking static target drivers/librte_bus_vdev.a 00:03:52.938 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:52.938 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:52.938 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:52.938 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:52.938 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:53.262 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:53.262 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.262 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:53.828 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:53.828 [230/268] Linking static target lib/librte_vhost.a 00:03:55.204 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.141 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.400 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.400 [234/268] Linking target lib/librte_eal.so.24.1 00:03:56.659 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:56.659 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:56.659 [237/268] Linking target lib/librte_timer.so.24.1 00:03:56.659 [238/268] Linking target lib/librte_pci.so.24.1 00:03:56.659 [239/268] Linking target lib/librte_ring.so.24.1 00:03:56.659 [240/268] Linking target lib/librte_meter.so.24.1 00:03:56.659 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:56.917 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:56.917 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:56.917 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:56.917 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:56.917 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:56.917 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:56.917 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:56.917 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:57.176 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:57.176 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:57.176 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:57.176 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:57.435 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:57.435 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:57.435 [256/268] Linking target lib/librte_net.so.24.1 00:03:57.435 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:57.435 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:57.435 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:57.435 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:57.694 [261/268] Linking target lib/librte_security.so.24.1 00:03:57.694 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:57.694 [263/268] Linking target lib/librte_hash.so.24.1 00:03:57.694 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:57.694 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:57.694 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:57.694 [267/268] Linking target lib/librte_vhost.so.24.1 00:03:57.967 [268/268] Linking target lib/librte_power.so.24.1 00:03:57.967 INFO: autodetecting backend as ninja 00:03:57.967 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:19.896 CC lib/log/log_deprecated.o 00:04:19.896 CC lib/log/log_flags.o 00:04:19.896 CC lib/log/log.o 
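The DPDK step recorded above (the "User defined options" summary logged at 00:03:40 and the ninja backend command reported at 00:03:57) can be reproduced by hand with a plain meson/ninja invocation. The sketch below is only an illustration assembled from the logged option values; in the real run SPDK's dpdkbuild wrapper generates this configuration, so the exact command shape shown here is an assumption, not the command the CI job executed.

    # Sketch only: option values copied from the "User defined options" block at 00:03:40;
    # SPDK normally drives this via its dpdkbuild wrapper rather than calling meson directly.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Dlibdir=lib \
        -Dcpu_instruction_set=native \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dmax_lcores=128
    # Same backend command that meson/ninja autodetection reports at 00:03:57.
    ninja -C build-tmp -j 10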
00:04:19.896 CC lib/ut_mock/mock.o 00:04:19.896 CC lib/ut/ut.o 00:04:19.896 LIB libspdk_ut.a 00:04:19.896 LIB libspdk_log.a 00:04:19.896 LIB libspdk_ut_mock.a 00:04:19.896 SO libspdk_ut.so.2.0 00:04:19.896 SO libspdk_ut_mock.so.6.0 00:04:19.896 SO libspdk_log.so.7.1 00:04:19.896 SYMLINK libspdk_ut_mock.so 00:04:19.896 SYMLINK libspdk_ut.so 00:04:19.896 SYMLINK libspdk_log.so 00:04:19.896 CC lib/ioat/ioat.o 00:04:19.896 CC lib/util/base64.o 00:04:19.896 CC lib/util/bit_array.o 00:04:19.896 CC lib/util/cpuset.o 00:04:19.896 CC lib/util/crc16.o 00:04:19.896 CC lib/util/crc32.o 00:04:19.896 CC lib/util/crc32c.o 00:04:19.896 CXX lib/trace_parser/trace.o 00:04:19.896 CC lib/dma/dma.o 00:04:19.896 CC lib/vfio_user/host/vfio_user_pci.o 00:04:19.896 CC lib/vfio_user/host/vfio_user.o 00:04:19.896 CC lib/util/crc32_ieee.o 00:04:19.896 CC lib/util/crc64.o 00:04:19.896 CC lib/util/dif.o 00:04:19.896 LIB libspdk_dma.a 00:04:19.896 CC lib/util/fd.o 00:04:19.896 SO libspdk_dma.so.5.0 00:04:19.896 CC lib/util/fd_group.o 00:04:19.896 LIB libspdk_ioat.a 00:04:19.896 CC lib/util/file.o 00:04:19.896 CC lib/util/hexlify.o 00:04:19.896 SO libspdk_ioat.so.7.0 00:04:19.896 SYMLINK libspdk_dma.so 00:04:19.896 CC lib/util/iov.o 00:04:19.896 SYMLINK libspdk_ioat.so 00:04:19.896 CC lib/util/math.o 00:04:19.896 CC lib/util/net.o 00:04:19.896 CC lib/util/pipe.o 00:04:19.896 LIB libspdk_vfio_user.a 00:04:19.896 CC lib/util/strerror_tls.o 00:04:19.896 CC lib/util/string.o 00:04:19.896 SO libspdk_vfio_user.so.5.0 00:04:19.896 CC lib/util/uuid.o 00:04:19.896 CC lib/util/xor.o 00:04:19.896 CC lib/util/zipf.o 00:04:19.896 CC lib/util/md5.o 00:04:19.896 SYMLINK libspdk_vfio_user.so 00:04:19.896 LIB libspdk_util.a 00:04:19.896 SO libspdk_util.so.10.1 00:04:19.896 LIB libspdk_trace_parser.a 00:04:19.896 SYMLINK libspdk_util.so 00:04:19.896 SO libspdk_trace_parser.so.6.0 00:04:19.896 SYMLINK libspdk_trace_parser.so 00:04:19.896 CC lib/idxd/idxd.o 00:04:19.896 CC lib/idxd/idxd_user.o 00:04:19.896 CC lib/idxd/idxd_kernel.o 00:04:19.896 CC lib/rdma_utils/rdma_utils.o 00:04:19.896 CC lib/json/json_parse.o 00:04:19.896 CC lib/json/json_util.o 00:04:19.896 CC lib/json/json_write.o 00:04:19.896 CC lib/vmd/vmd.o 00:04:19.896 CC lib/conf/conf.o 00:04:19.896 CC lib/env_dpdk/env.o 00:04:19.896 CC lib/env_dpdk/memory.o 00:04:19.896 CC lib/env_dpdk/pci.o 00:04:19.896 LIB libspdk_conf.a 00:04:19.896 CC lib/env_dpdk/init.o 00:04:19.896 CC lib/env_dpdk/threads.o 00:04:19.896 SO libspdk_conf.so.6.0 00:04:19.896 LIB libspdk_rdma_utils.a 00:04:19.896 LIB libspdk_json.a 00:04:19.896 SO libspdk_rdma_utils.so.1.0 00:04:19.896 SO libspdk_json.so.6.0 00:04:19.896 SYMLINK libspdk_conf.so 00:04:19.896 CC lib/env_dpdk/pci_ioat.o 00:04:19.896 SYMLINK libspdk_rdma_utils.so 00:04:19.896 CC lib/env_dpdk/pci_virtio.o 00:04:19.896 SYMLINK libspdk_json.so 00:04:19.896 CC lib/env_dpdk/pci_vmd.o 00:04:19.896 CC lib/vmd/led.o 00:04:19.896 CC lib/env_dpdk/pci_idxd.o 00:04:19.896 CC lib/env_dpdk/pci_event.o 00:04:19.896 CC lib/env_dpdk/sigbus_handler.o 00:04:19.896 LIB libspdk_idxd.a 00:04:19.896 CC lib/env_dpdk/pci_dpdk.o 00:04:19.896 SO libspdk_idxd.so.12.1 00:04:19.896 LIB libspdk_vmd.a 00:04:19.896 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.896 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.896 SO libspdk_vmd.so.6.0 00:04:19.896 SYMLINK libspdk_idxd.so 00:04:19.896 SYMLINK libspdk_vmd.so 00:04:19.896 CC lib/rdma_provider/common.o 00:04:19.896 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.896 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.896 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.896 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.896 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.896 LIB libspdk_rdma_provider.a 00:04:19.896 SO libspdk_rdma_provider.so.7.0 00:04:19.896 SYMLINK libspdk_rdma_provider.so 00:04:19.896 LIB libspdk_jsonrpc.a 00:04:19.896 SO libspdk_jsonrpc.so.6.0 00:04:19.896 SYMLINK libspdk_jsonrpc.so 00:04:19.896 LIB libspdk_env_dpdk.a 00:04:19.896 SO libspdk_env_dpdk.so.15.1 00:04:19.896 CC lib/rpc/rpc.o 00:04:19.896 SYMLINK libspdk_env_dpdk.so 00:04:19.896 LIB libspdk_rpc.a 00:04:19.896 SO libspdk_rpc.so.6.0 00:04:19.896 SYMLINK libspdk_rpc.so 00:04:20.155 CC lib/notify/notify.o 00:04:20.155 CC lib/notify/notify_rpc.o 00:04:20.155 CC lib/trace/trace.o 00:04:20.155 CC lib/trace/trace_flags.o 00:04:20.155 CC lib/trace/trace_rpc.o 00:04:20.155 CC lib/keyring/keyring.o 00:04:20.155 CC lib/keyring/keyring_rpc.o 00:04:20.490 LIB libspdk_notify.a 00:04:20.490 SO libspdk_notify.so.6.0 00:04:20.490 LIB libspdk_keyring.a 00:04:20.490 SYMLINK libspdk_notify.so 00:04:20.490 LIB libspdk_trace.a 00:04:20.490 SO libspdk_keyring.so.2.0 00:04:20.490 SO libspdk_trace.so.11.0 00:04:20.490 SYMLINK libspdk_keyring.so 00:04:20.751 SYMLINK libspdk_trace.so 00:04:21.011 CC lib/thread/thread.o 00:04:21.011 CC lib/thread/iobuf.o 00:04:21.011 CC lib/sock/sock.o 00:04:21.011 CC lib/sock/sock_rpc.o 00:04:21.274 LIB libspdk_sock.a 00:04:21.274 SO libspdk_sock.so.10.0 00:04:21.274 SYMLINK libspdk_sock.so 00:04:21.842 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:21.842 CC lib/nvme/nvme_ctrlr.o 00:04:21.842 CC lib/nvme/nvme_ns.o 00:04:21.842 CC lib/nvme/nvme_ns_cmd.o 00:04:21.842 CC lib/nvme/nvme_fabric.o 00:04:21.842 CC lib/nvme/nvme_qpair.o 00:04:21.842 CC lib/nvme/nvme.o 00:04:21.842 CC lib/nvme/nvme_pcie_common.o 00:04:21.842 CC lib/nvme/nvme_pcie.o 00:04:22.408 LIB libspdk_thread.a 00:04:22.408 SO libspdk_thread.so.11.0 00:04:22.408 CC lib/nvme/nvme_quirks.o 00:04:22.408 SYMLINK libspdk_thread.so 00:04:22.408 CC lib/nvme/nvme_transport.o 00:04:22.408 CC lib/nvme/nvme_discovery.o 00:04:22.408 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.408 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:22.667 CC lib/nvme/nvme_tcp.o 00:04:22.667 CC lib/nvme/nvme_opal.o 00:04:22.667 CC lib/nvme/nvme_io_msg.o 00:04:22.667 CC lib/nvme/nvme_poll_group.o 00:04:22.926 CC lib/nvme/nvme_zns.o 00:04:22.926 CC lib/nvme/nvme_stubs.o 00:04:22.926 CC lib/nvme/nvme_auth.o 00:04:22.926 CC lib/nvme/nvme_cuse.o 00:04:23.184 CC lib/nvme/nvme_rdma.o 00:04:23.184 CC lib/accel/accel.o 00:04:23.442 CC lib/blob/blobstore.o 00:04:23.442 CC lib/blob/request.o 00:04:23.442 CC lib/init/json_config.o 00:04:23.442 CC lib/blob/zeroes.o 00:04:23.701 CC lib/init/subsystem.o 00:04:23.701 CC lib/init/subsystem_rpc.o 00:04:23.701 CC lib/init/rpc.o 00:04:23.960 CC lib/blob/blob_bs_dev.o 00:04:23.960 CC lib/accel/accel_rpc.o 00:04:23.960 CC lib/virtio/virtio.o 00:04:23.960 CC lib/accel/accel_sw.o 00:04:23.960 CC lib/virtio/virtio_vhost_user.o 00:04:23.960 CC lib/fsdev/fsdev.o 00:04:23.960 LIB libspdk_init.a 00:04:23.960 CC lib/fsdev/fsdev_io.o 00:04:23.960 SO libspdk_init.so.6.0 00:04:23.960 CC lib/fsdev/fsdev_rpc.o 00:04:24.220 SYMLINK libspdk_init.so 00:04:24.220 CC lib/virtio/virtio_vfio_user.o 00:04:24.220 CC lib/virtio/virtio_pci.o 00:04:24.220 LIB libspdk_accel.a 00:04:24.220 SO libspdk_accel.so.16.0 00:04:24.479 SYMLINK libspdk_accel.so 00:04:24.479 CC lib/event/app.o 00:04:24.479 CC lib/event/reactor.o 00:04:24.479 CC lib/event/log_rpc.o 00:04:24.479 CC lib/event/scheduler_static.o 00:04:24.479 CC 
lib/event/app_rpc.o 00:04:24.479 LIB libspdk_nvme.a 00:04:24.479 LIB libspdk_virtio.a 00:04:24.479 LIB libspdk_fsdev.a 00:04:24.479 CC lib/bdev/bdev.o 00:04:24.479 CC lib/bdev/bdev_rpc.o 00:04:24.479 CC lib/bdev/bdev_zone.o 00:04:24.479 SO libspdk_virtio.so.7.0 00:04:24.738 SO libspdk_fsdev.so.2.0 00:04:24.738 SYMLINK libspdk_virtio.so 00:04:24.738 SO libspdk_nvme.so.15.0 00:04:24.738 CC lib/bdev/part.o 00:04:24.738 CC lib/bdev/scsi_nvme.o 00:04:24.738 SYMLINK libspdk_fsdev.so 00:04:24.738 LIB libspdk_event.a 00:04:24.738 SO libspdk_event.so.14.0 00:04:24.738 SYMLINK libspdk_nvme.so 00:04:24.996 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:24.996 SYMLINK libspdk_event.so 00:04:25.563 LIB libspdk_fuse_dispatcher.a 00:04:25.563 SO libspdk_fuse_dispatcher.so.1.0 00:04:25.563 SYMLINK libspdk_fuse_dispatcher.so 00:04:26.129 LIB libspdk_blob.a 00:04:26.129 SO libspdk_blob.so.11.0 00:04:26.129 SYMLINK libspdk_blob.so 00:04:26.388 CC lib/lvol/lvol.o 00:04:26.388 CC lib/blobfs/tree.o 00:04:26.388 CC lib/blobfs/blobfs.o 00:04:26.956 LIB libspdk_bdev.a 00:04:26.956 SO libspdk_bdev.so.17.0 00:04:26.956 SYMLINK libspdk_bdev.so 00:04:27.214 LIB libspdk_blobfs.a 00:04:27.214 CC lib/scsi/dev.o 00:04:27.214 CC lib/scsi/lun.o 00:04:27.214 CC lib/ublk/ublk.o 00:04:27.214 CC lib/scsi/scsi.o 00:04:27.214 CC lib/scsi/port.o 00:04:27.214 CC lib/nvmf/ctrlr.o 00:04:27.214 CC lib/nbd/nbd.o 00:04:27.214 SO libspdk_blobfs.so.10.0 00:04:27.214 CC lib/ftl/ftl_core.o 00:04:27.214 LIB libspdk_lvol.a 00:04:27.214 SO libspdk_lvol.so.10.0 00:04:27.474 SYMLINK libspdk_blobfs.so 00:04:27.474 CC lib/nbd/nbd_rpc.o 00:04:27.474 CC lib/ublk/ublk_rpc.o 00:04:27.474 SYMLINK libspdk_lvol.so 00:04:27.474 CC lib/scsi/scsi_bdev.o 00:04:27.474 CC lib/scsi/scsi_pr.o 00:04:27.474 CC lib/scsi/scsi_rpc.o 00:04:27.474 CC lib/scsi/task.o 00:04:27.474 CC lib/ftl/ftl_init.o 00:04:27.733 CC lib/ftl/ftl_layout.o 00:04:27.733 LIB libspdk_nbd.a 00:04:27.733 CC lib/ftl/ftl_debug.o 00:04:27.733 CC lib/ftl/ftl_io.o 00:04:27.733 SO libspdk_nbd.so.7.0 00:04:27.733 CC lib/ftl/ftl_sb.o 00:04:27.733 SYMLINK libspdk_nbd.so 00:04:27.733 CC lib/ftl/ftl_l2p.o 00:04:27.733 CC lib/ftl/ftl_l2p_flat.o 00:04:27.733 CC lib/ftl/ftl_nv_cache.o 00:04:27.733 LIB libspdk_ublk.a 00:04:27.992 SO libspdk_ublk.so.3.0 00:04:27.992 CC lib/ftl/ftl_band.o 00:04:27.992 CC lib/nvmf/ctrlr_discovery.o 00:04:27.992 LIB libspdk_scsi.a 00:04:27.992 CC lib/ftl/ftl_band_ops.o 00:04:27.992 CC lib/ftl/ftl_writer.o 00:04:27.992 SYMLINK libspdk_ublk.so 00:04:27.992 CC lib/ftl/ftl_rq.o 00:04:27.992 SO libspdk_scsi.so.9.0 00:04:27.992 CC lib/ftl/ftl_reloc.o 00:04:27.992 CC lib/ftl/ftl_l2p_cache.o 00:04:27.992 SYMLINK libspdk_scsi.so 00:04:28.252 CC lib/nvmf/ctrlr_bdev.o 00:04:28.252 CC lib/ftl/ftl_p2l.o 00:04:28.252 CC lib/iscsi/conn.o 00:04:28.252 CC lib/iscsi/init_grp.o 00:04:28.252 CC lib/iscsi/iscsi.o 00:04:28.252 CC lib/ftl/ftl_p2l_log.o 00:04:28.534 CC lib/iscsi/param.o 00:04:28.534 CC lib/ftl/mngt/ftl_mngt.o 00:04:28.534 CC lib/nvmf/subsystem.o 00:04:28.534 CC lib/vhost/vhost.o 00:04:28.534 CC lib/vhost/vhost_rpc.o 00:04:28.797 CC lib/nvmf/nvmf.o 00:04:28.797 CC lib/nvmf/nvmf_rpc.o 00:04:28.797 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:28.797 CC lib/nvmf/transport.o 00:04:28.797 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:28.797 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:29.056 CC lib/iscsi/portal_grp.o 00:04:29.056 CC lib/iscsi/tgt_node.o 00:04:29.056 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:29.315 CC lib/vhost/vhost_scsi.o 00:04:29.315 CC lib/vhost/vhost_blk.o 00:04:29.315 CC 
lib/iscsi/iscsi_subsystem.o 00:04:29.315 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:29.573 CC lib/iscsi/iscsi_rpc.o 00:04:29.573 CC lib/iscsi/task.o 00:04:29.573 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:29.573 CC lib/nvmf/tcp.o 00:04:29.573 CC lib/vhost/rte_vhost_user.o 00:04:29.573 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:29.573 CC lib/nvmf/stubs.o 00:04:29.573 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:29.573 CC lib/nvmf/mdns_server.o 00:04:29.831 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:29.831 LIB libspdk_iscsi.a 00:04:29.831 CC lib/nvmf/rdma.o 00:04:29.831 CC lib/nvmf/auth.o 00:04:29.831 SO libspdk_iscsi.so.8.0 00:04:30.089 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:30.089 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:30.089 SYMLINK libspdk_iscsi.so 00:04:30.089 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:30.089 CC lib/ftl/utils/ftl_conf.o 00:04:30.089 CC lib/ftl/utils/ftl_md.o 00:04:30.089 CC lib/ftl/utils/ftl_mempool.o 00:04:30.348 CC lib/ftl/utils/ftl_bitmap.o 00:04:30.348 CC lib/ftl/utils/ftl_property.o 00:04:30.348 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:30.348 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:30.608 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:30.608 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:30.608 LIB libspdk_vhost.a 00:04:30.608 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:30.608 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:30.608 SO libspdk_vhost.so.8.0 00:04:30.608 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:30.608 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:30.608 SYMLINK libspdk_vhost.so 00:04:30.608 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:30.608 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:30.608 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:30.867 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:30.867 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:30.867 CC lib/ftl/base/ftl_base_dev.o 00:04:30.867 CC lib/ftl/base/ftl_base_bdev.o 00:04:30.867 CC lib/ftl/ftl_trace.o 00:04:31.126 LIB libspdk_ftl.a 00:04:31.385 SO libspdk_ftl.so.9.0 00:04:31.644 SYMLINK libspdk_ftl.so 00:04:31.644 LIB libspdk_nvmf.a 00:04:31.901 SO libspdk_nvmf.so.20.0 00:04:31.901 SYMLINK libspdk_nvmf.so 00:04:32.468 CC module/env_dpdk/env_dpdk_rpc.o 00:04:32.468 CC module/accel/error/accel_error.o 00:04:32.468 CC module/fsdev/aio/fsdev_aio.o 00:04:32.468 CC module/accel/dsa/accel_dsa.o 00:04:32.468 CC module/blob/bdev/blob_bdev.o 00:04:32.468 CC module/accel/ioat/accel_ioat.o 00:04:32.468 CC module/accel/iaa/accel_iaa.o 00:04:32.468 CC module/keyring/file/keyring.o 00:04:32.468 CC module/sock/posix/posix.o 00:04:32.468 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:32.468 LIB libspdk_env_dpdk_rpc.a 00:04:32.468 SO libspdk_env_dpdk_rpc.so.6.0 00:04:32.726 CC module/keyring/file/keyring_rpc.o 00:04:32.726 CC module/accel/error/accel_error_rpc.o 00:04:32.726 CC module/accel/ioat/accel_ioat_rpc.o 00:04:32.726 SYMLINK libspdk_env_dpdk_rpc.so 00:04:32.726 CC module/accel/dsa/accel_dsa_rpc.o 00:04:32.726 CC module/accel/iaa/accel_iaa_rpc.o 00:04:32.726 LIB libspdk_scheduler_dynamic.a 00:04:32.726 SO libspdk_scheduler_dynamic.so.4.0 00:04:32.726 LIB libspdk_blob_bdev.a 00:04:32.726 SYMLINK libspdk_scheduler_dynamic.so 00:04:32.726 LIB libspdk_keyring_file.a 00:04:32.726 SO libspdk_blob_bdev.so.11.0 00:04:32.726 LIB libspdk_accel_error.a 00:04:32.726 LIB libspdk_accel_ioat.a 00:04:32.726 LIB libspdk_accel_dsa.a 00:04:32.726 SO libspdk_accel_error.so.2.0 00:04:32.726 SO libspdk_keyring_file.so.2.0 00:04:32.726 SO libspdk_accel_ioat.so.6.0 00:04:32.726 LIB libspdk_accel_iaa.a 00:04:32.726 SYMLINK libspdk_blob_bdev.so 00:04:32.726 SO 
libspdk_accel_dsa.so.5.0 00:04:32.726 SO libspdk_accel_iaa.so.3.0 00:04:32.726 SYMLINK libspdk_accel_error.so 00:04:32.726 SYMLINK libspdk_keyring_file.so 00:04:32.985 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:32.985 SYMLINK libspdk_accel_ioat.so 00:04:32.985 CC module/fsdev/aio/linux_aio_mgr.o 00:04:32.985 SYMLINK libspdk_accel_dsa.so 00:04:32.985 CC module/keyring/linux/keyring.o 00:04:32.985 SYMLINK libspdk_accel_iaa.so 00:04:32.985 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:32.985 CC module/keyring/linux/keyring_rpc.o 00:04:32.985 LIB libspdk_fsdev_aio.a 00:04:32.985 CC module/scheduler/gscheduler/gscheduler.o 00:04:32.985 LIB libspdk_scheduler_dpdk_governor.a 00:04:32.985 SO libspdk_fsdev_aio.so.1.0 00:04:33.243 CC module/sock/uring/uring.o 00:04:33.244 LIB libspdk_sock_posix.a 00:04:33.244 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:33.244 CC module/bdev/delay/vbdev_delay.o 00:04:33.244 LIB libspdk_keyring_linux.a 00:04:33.244 CC module/blobfs/bdev/blobfs_bdev.o 00:04:33.244 SO libspdk_sock_posix.so.6.0 00:04:33.244 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.244 SYMLINK libspdk_fsdev_aio.so 00:04:33.244 SO libspdk_keyring_linux.so.1.0 00:04:33.244 CC module/bdev/error/vbdev_error.o 00:04:33.244 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.244 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.244 CC module/bdev/gpt/gpt.o 00:04:33.244 LIB libspdk_scheduler_gscheduler.a 00:04:33.244 SYMLINK libspdk_sock_posix.so 00:04:33.244 SYMLINK libspdk_keyring_linux.so 00:04:33.244 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.244 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.244 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:33.502 CC module/bdev/gpt/vbdev_gpt.o 00:04:33.502 CC module/bdev/lvol/vbdev_lvol.o 00:04:33.502 CC module/bdev/malloc/bdev_malloc.o 00:04:33.502 LIB libspdk_bdev_error.a 00:04:33.502 LIB libspdk_bdev_delay.a 00:04:33.502 SO libspdk_bdev_error.so.6.0 00:04:33.502 CC module/bdev/null/bdev_null.o 00:04:33.502 SO libspdk_bdev_delay.so.6.0 00:04:33.502 LIB libspdk_blobfs_bdev.a 00:04:33.502 CC module/bdev/nvme/bdev_nvme.o 00:04:33.502 CC module/bdev/passthru/vbdev_passthru.o 00:04:33.502 SO libspdk_blobfs_bdev.so.6.0 00:04:33.502 SYMLINK libspdk_bdev_error.so 00:04:33.502 SYMLINK libspdk_bdev_delay.so 00:04:33.502 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.502 SYMLINK libspdk_blobfs_bdev.so 00:04:33.502 CC module/bdev/null/bdev_null_rpc.o 00:04:33.761 LIB libspdk_bdev_gpt.a 00:04:33.761 SO libspdk_bdev_gpt.so.6.0 00:04:33.761 CC module/bdev/raid/bdev_raid.o 00:04:33.761 CC module/bdev/raid/bdev_raid_rpc.o 00:04:33.761 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:33.761 LIB libspdk_sock_uring.a 00:04:33.761 SYMLINK libspdk_bdev_gpt.so 00:04:33.761 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:33.761 LIB libspdk_bdev_null.a 00:04:33.761 SO libspdk_sock_uring.so.5.0 00:04:33.761 SO libspdk_bdev_null.so.6.0 00:04:33.761 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:33.761 SYMLINK libspdk_sock_uring.so 00:04:33.761 SYMLINK libspdk_bdev_null.so 00:04:33.761 CC module/bdev/raid/bdev_raid_sb.o 00:04:34.020 LIB libspdk_bdev_lvol.a 00:04:34.020 LIB libspdk_bdev_malloc.a 00:04:34.020 SO libspdk_bdev_lvol.so.6.0 00:04:34.020 SO libspdk_bdev_malloc.so.6.0 00:04:34.020 LIB libspdk_bdev_passthru.a 00:04:34.020 SYMLINK libspdk_bdev_malloc.so 00:04:34.020 CC module/bdev/raid/raid0.o 00:04:34.020 SYMLINK libspdk_bdev_lvol.so 00:04:34.020 SO libspdk_bdev_passthru.so.6.0 00:04:34.020 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.020 CC 
module/bdev/split/vbdev_split.o 00:04:34.020 SYMLINK libspdk_bdev_passthru.so 00:04:34.020 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.020 CC module/bdev/raid/raid1.o 00:04:34.020 CC module/bdev/uring/bdev_uring.o 00:04:34.278 CC module/bdev/aio/bdev_aio.o 00:04:34.278 CC module/bdev/uring/bdev_uring_rpc.o 00:04:34.278 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:34.278 LIB libspdk_bdev_split.a 00:04:34.278 SO libspdk_bdev_split.so.6.0 00:04:34.278 CC module/bdev/nvme/nvme_rpc.o 00:04:34.278 CC module/bdev/nvme/bdev_mdns_client.o 00:04:34.278 CC module/bdev/raid/concat.o 00:04:34.278 SYMLINK libspdk_bdev_split.so 00:04:34.278 CC module/bdev/aio/bdev_aio_rpc.o 00:04:34.536 LIB libspdk_bdev_zone_block.a 00:04:34.536 SO libspdk_bdev_zone_block.so.6.0 00:04:34.536 CC module/bdev/nvme/vbdev_opal.o 00:04:34.536 LIB libspdk_bdev_uring.a 00:04:34.536 SYMLINK libspdk_bdev_zone_block.so 00:04:34.536 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:34.536 SO libspdk_bdev_uring.so.6.0 00:04:34.536 CC module/bdev/ftl/bdev_ftl.o 00:04:34.536 LIB libspdk_bdev_aio.a 00:04:34.536 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:34.536 LIB libspdk_bdev_raid.a 00:04:34.536 SYMLINK libspdk_bdev_uring.so 00:04:34.536 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:34.536 SO libspdk_bdev_aio.so.6.0 00:04:34.536 SO libspdk_bdev_raid.so.6.0 00:04:34.795 SYMLINK libspdk_bdev_aio.so 00:04:34.795 CC module/bdev/iscsi/bdev_iscsi.o 00:04:34.795 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:34.795 SYMLINK libspdk_bdev_raid.so 00:04:34.795 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.795 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.795 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:34.795 LIB libspdk_bdev_ftl.a 00:04:34.795 SO libspdk_bdev_ftl.so.6.0 00:04:34.795 SYMLINK libspdk_bdev_ftl.so 00:04:35.054 LIB libspdk_bdev_iscsi.a 00:04:35.054 SO libspdk_bdev_iscsi.so.6.0 00:04:35.054 SYMLINK libspdk_bdev_iscsi.so 00:04:35.313 LIB libspdk_bdev_virtio.a 00:04:35.313 SO libspdk_bdev_virtio.so.6.0 00:04:35.313 SYMLINK libspdk_bdev_virtio.so 00:04:35.570 LIB libspdk_bdev_nvme.a 00:04:35.829 SO libspdk_bdev_nvme.so.7.1 00:04:35.829 SYMLINK libspdk_bdev_nvme.so 00:04:36.396 CC module/event/subsystems/scheduler/scheduler.o 00:04:36.396 CC module/event/subsystems/vmd/vmd.o 00:04:36.396 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:36.396 CC module/event/subsystems/keyring/keyring.o 00:04:36.396 CC module/event/subsystems/sock/sock.o 00:04:36.396 CC module/event/subsystems/fsdev/fsdev.o 00:04:36.396 CC module/event/subsystems/iobuf/iobuf.o 00:04:36.396 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:36.396 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:36.654 LIB libspdk_event_keyring.a 00:04:36.654 LIB libspdk_event_fsdev.a 00:04:36.654 SO libspdk_event_keyring.so.1.0 00:04:36.654 LIB libspdk_event_vhost_blk.a 00:04:36.654 SO libspdk_event_fsdev.so.1.0 00:04:36.654 LIB libspdk_event_scheduler.a 00:04:36.654 LIB libspdk_event_vmd.a 00:04:36.654 LIB libspdk_event_iobuf.a 00:04:36.654 SO libspdk_event_vhost_blk.so.3.0 00:04:36.654 LIB libspdk_event_sock.a 00:04:36.654 SYMLINK libspdk_event_keyring.so 00:04:36.654 SO libspdk_event_scheduler.so.4.0 00:04:36.654 SYMLINK libspdk_event_fsdev.so 00:04:36.654 SO libspdk_event_sock.so.5.0 00:04:36.654 SO libspdk_event_vmd.so.6.0 00:04:36.654 SO libspdk_event_iobuf.so.3.0 00:04:36.654 SYMLINK libspdk_event_vhost_blk.so 00:04:36.654 SYMLINK libspdk_event_vmd.so 00:04:36.654 SYMLINK libspdk_event_scheduler.so 00:04:36.654 SYMLINK libspdk_event_iobuf.so 00:04:36.654 SYMLINK 
libspdk_event_sock.so 00:04:36.912 CC module/event/subsystems/accel/accel.o 00:04:37.171 LIB libspdk_event_accel.a 00:04:37.171 SO libspdk_event_accel.so.6.0 00:04:37.171 SYMLINK libspdk_event_accel.so 00:04:37.738 CC module/event/subsystems/bdev/bdev.o 00:04:37.738 LIB libspdk_event_bdev.a 00:04:37.738 SO libspdk_event_bdev.so.6.0 00:04:37.998 SYMLINK libspdk_event_bdev.so 00:04:38.257 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:38.257 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:38.257 CC module/event/subsystems/ublk/ublk.o 00:04:38.257 CC module/event/subsystems/nbd/nbd.o 00:04:38.257 CC module/event/subsystems/scsi/scsi.o 00:04:38.257 LIB libspdk_event_nbd.a 00:04:38.257 LIB libspdk_event_ublk.a 00:04:38.257 LIB libspdk_event_scsi.a 00:04:38.257 SO libspdk_event_nbd.so.6.0 00:04:38.515 SO libspdk_event_ublk.so.3.0 00:04:38.515 SO libspdk_event_scsi.so.6.0 00:04:38.515 SYMLINK libspdk_event_nbd.so 00:04:38.515 SYMLINK libspdk_event_ublk.so 00:04:38.515 LIB libspdk_event_nvmf.a 00:04:38.515 SYMLINK libspdk_event_scsi.so 00:04:38.515 SO libspdk_event_nvmf.so.6.0 00:04:38.515 SYMLINK libspdk_event_nvmf.so 00:04:38.773 CC module/event/subsystems/iscsi/iscsi.o 00:04:38.773 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:39.031 LIB libspdk_event_vhost_scsi.a 00:04:39.031 LIB libspdk_event_iscsi.a 00:04:39.031 SO libspdk_event_vhost_scsi.so.3.0 00:04:39.031 SO libspdk_event_iscsi.so.6.0 00:04:39.031 SYMLINK libspdk_event_vhost_scsi.so 00:04:39.031 SYMLINK libspdk_event_iscsi.so 00:04:39.290 SO libspdk.so.6.0 00:04:39.290 SYMLINK libspdk.so 00:04:39.548 CC app/spdk_nvme_identify/identify.o 00:04:39.548 CC app/trace_record/trace_record.o 00:04:39.548 CC app/spdk_nvme_perf/perf.o 00:04:39.548 CC app/spdk_lspci/spdk_lspci.o 00:04:39.548 CXX app/trace/trace.o 00:04:39.548 CC app/iscsi_tgt/iscsi_tgt.o 00:04:39.548 CC app/nvmf_tgt/nvmf_main.o 00:04:39.548 CC app/spdk_tgt/spdk_tgt.o 00:04:39.806 CC test/thread/poller_perf/poller_perf.o 00:04:39.806 CC examples/util/zipf/zipf.o 00:04:39.806 LINK spdk_lspci 00:04:39.806 LINK nvmf_tgt 00:04:39.806 LINK spdk_trace_record 00:04:39.806 LINK poller_perf 00:04:39.806 LINK iscsi_tgt 00:04:39.806 LINK zipf 00:04:40.064 LINK spdk_tgt 00:04:40.064 CC app/spdk_nvme_discover/discovery_aer.o 00:04:40.064 LINK spdk_trace 00:04:40.064 CC app/spdk_top/spdk_top.o 00:04:40.329 LINK spdk_nvme_discover 00:04:40.329 CC test/dma/test_dma/test_dma.o 00:04:40.329 CC app/spdk_dd/spdk_dd.o 00:04:40.329 CC examples/ioat/perf/perf.o 00:04:40.329 CC app/fio/nvme/fio_plugin.o 00:04:40.329 CC test/app/bdev_svc/bdev_svc.o 00:04:40.329 CC app/fio/bdev/fio_plugin.o 00:04:40.329 LINK spdk_nvme_identify 00:04:40.588 LINK spdk_nvme_perf 00:04:40.588 CC examples/ioat/verify/verify.o 00:04:40.588 LINK ioat_perf 00:04:40.588 LINK bdev_svc 00:04:40.588 LINK spdk_dd 00:04:40.846 LINK verify 00:04:40.846 TEST_HEADER include/spdk/accel.h 00:04:40.846 TEST_HEADER include/spdk/accel_module.h 00:04:40.846 TEST_HEADER include/spdk/assert.h 00:04:40.846 TEST_HEADER include/spdk/barrier.h 00:04:40.846 TEST_HEADER include/spdk/base64.h 00:04:40.846 TEST_HEADER include/spdk/bdev.h 00:04:40.846 LINK test_dma 00:04:40.846 TEST_HEADER include/spdk/bdev_module.h 00:04:40.846 TEST_HEADER include/spdk/bdev_zone.h 00:04:40.846 TEST_HEADER include/spdk/bit_array.h 00:04:40.846 TEST_HEADER include/spdk/bit_pool.h 00:04:40.846 TEST_HEADER include/spdk/blob_bdev.h 00:04:40.846 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:40.846 TEST_HEADER include/spdk/blobfs.h 00:04:40.846 TEST_HEADER 
include/spdk/blob.h 00:04:40.846 TEST_HEADER include/spdk/conf.h 00:04:40.846 TEST_HEADER include/spdk/config.h 00:04:40.846 TEST_HEADER include/spdk/cpuset.h 00:04:40.846 TEST_HEADER include/spdk/crc16.h 00:04:40.846 TEST_HEADER include/spdk/crc32.h 00:04:40.846 TEST_HEADER include/spdk/crc64.h 00:04:40.846 TEST_HEADER include/spdk/dif.h 00:04:40.846 TEST_HEADER include/spdk/dma.h 00:04:40.846 TEST_HEADER include/spdk/endian.h 00:04:40.846 TEST_HEADER include/spdk/env_dpdk.h 00:04:40.846 TEST_HEADER include/spdk/env.h 00:04:40.846 TEST_HEADER include/spdk/event.h 00:04:40.846 TEST_HEADER include/spdk/fd_group.h 00:04:40.846 TEST_HEADER include/spdk/fd.h 00:04:40.846 TEST_HEADER include/spdk/file.h 00:04:40.846 TEST_HEADER include/spdk/fsdev.h 00:04:40.846 TEST_HEADER include/spdk/fsdev_module.h 00:04:40.846 TEST_HEADER include/spdk/ftl.h 00:04:40.846 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:40.846 TEST_HEADER include/spdk/gpt_spec.h 00:04:40.846 TEST_HEADER include/spdk/hexlify.h 00:04:40.846 TEST_HEADER include/spdk/histogram_data.h 00:04:40.846 CC test/event/event_perf/event_perf.o 00:04:40.846 TEST_HEADER include/spdk/idxd.h 00:04:40.846 TEST_HEADER include/spdk/idxd_spec.h 00:04:40.846 TEST_HEADER include/spdk/init.h 00:04:40.846 TEST_HEADER include/spdk/ioat.h 00:04:40.846 TEST_HEADER include/spdk/ioat_spec.h 00:04:40.846 TEST_HEADER include/spdk/iscsi_spec.h 00:04:40.846 TEST_HEADER include/spdk/json.h 00:04:40.846 TEST_HEADER include/spdk/jsonrpc.h 00:04:40.846 LINK spdk_nvme 00:04:40.846 TEST_HEADER include/spdk/keyring.h 00:04:40.846 TEST_HEADER include/spdk/keyring_module.h 00:04:40.846 TEST_HEADER include/spdk/likely.h 00:04:40.846 LINK spdk_bdev 00:04:40.846 TEST_HEADER include/spdk/log.h 00:04:40.846 TEST_HEADER include/spdk/lvol.h 00:04:40.846 TEST_HEADER include/spdk/md5.h 00:04:40.846 TEST_HEADER include/spdk/memory.h 00:04:40.846 TEST_HEADER include/spdk/mmio.h 00:04:40.846 TEST_HEADER include/spdk/nbd.h 00:04:40.846 TEST_HEADER include/spdk/net.h 00:04:40.846 TEST_HEADER include/spdk/notify.h 00:04:40.846 TEST_HEADER include/spdk/nvme.h 00:04:40.846 TEST_HEADER include/spdk/nvme_intel.h 00:04:40.846 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:40.846 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:40.846 TEST_HEADER include/spdk/nvme_spec.h 00:04:40.846 TEST_HEADER include/spdk/nvme_zns.h 00:04:40.846 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:40.846 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:40.846 TEST_HEADER include/spdk/nvmf.h 00:04:40.846 TEST_HEADER include/spdk/nvmf_spec.h 00:04:40.846 TEST_HEADER include/spdk/nvmf_transport.h 00:04:40.846 TEST_HEADER include/spdk/opal.h 00:04:40.846 TEST_HEADER include/spdk/opal_spec.h 00:04:40.846 TEST_HEADER include/spdk/pci_ids.h 00:04:40.847 TEST_HEADER include/spdk/pipe.h 00:04:40.847 TEST_HEADER include/spdk/queue.h 00:04:40.847 CC test/env/mem_callbacks/mem_callbacks.o 00:04:40.847 TEST_HEADER include/spdk/reduce.h 00:04:40.847 TEST_HEADER include/spdk/rpc.h 00:04:40.847 TEST_HEADER include/spdk/scheduler.h 00:04:40.847 TEST_HEADER include/spdk/scsi.h 00:04:40.847 TEST_HEADER include/spdk/scsi_spec.h 00:04:41.105 TEST_HEADER include/spdk/sock.h 00:04:41.105 TEST_HEADER include/spdk/stdinc.h 00:04:41.105 TEST_HEADER include/spdk/string.h 00:04:41.105 TEST_HEADER include/spdk/thread.h 00:04:41.105 TEST_HEADER include/spdk/trace.h 00:04:41.105 TEST_HEADER include/spdk/trace_parser.h 00:04:41.105 TEST_HEADER include/spdk/tree.h 00:04:41.105 TEST_HEADER include/spdk/ublk.h 00:04:41.105 TEST_HEADER include/spdk/util.h 
00:04:41.105 TEST_HEADER include/spdk/uuid.h 00:04:41.105 TEST_HEADER include/spdk/version.h 00:04:41.105 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.105 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:41.105 TEST_HEADER include/spdk/vhost.h 00:04:41.105 TEST_HEADER include/spdk/vmd.h 00:04:41.105 TEST_HEADER include/spdk/xor.h 00:04:41.105 TEST_HEADER include/spdk/zipf.h 00:04:41.105 CXX test/cpp_headers/accel.o 00:04:41.105 LINK event_perf 00:04:41.105 CC app/vhost/vhost.o 00:04:41.105 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:41.105 CC test/app/histogram_perf/histogram_perf.o 00:04:41.105 LINK spdk_top 00:04:41.105 CC examples/vmd/led/led.o 00:04:41.105 CC examples/vmd/lsvmd/lsvmd.o 00:04:41.105 CC test/app/jsoncat/jsoncat.o 00:04:41.105 CXX test/cpp_headers/accel_module.o 00:04:41.105 LINK histogram_perf 00:04:41.363 CXX test/cpp_headers/assert.o 00:04:41.363 LINK vhost 00:04:41.363 LINK led 00:04:41.363 LINK lsvmd 00:04:41.363 LINK jsoncat 00:04:41.363 CC test/event/reactor/reactor.o 00:04:41.363 CXX test/cpp_headers/barrier.o 00:04:41.363 CXX test/cpp_headers/base64.o 00:04:41.363 LINK reactor 00:04:41.363 LINK nvme_fuzz 00:04:41.622 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:41.622 LINK mem_callbacks 00:04:41.622 CC examples/idxd/perf/perf.o 00:04:41.622 CC test/app/stub/stub.o 00:04:41.622 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:41.622 CXX test/cpp_headers/bdev.o 00:04:41.622 CC test/event/reactor_perf/reactor_perf.o 00:04:41.622 CC examples/thread/thread/thread_ex.o 00:04:41.622 LINK interrupt_tgt 00:04:41.622 CC test/event/app_repeat/app_repeat.o 00:04:41.622 LINK stub 00:04:41.622 CC test/env/vtophys/vtophys.o 00:04:41.879 CC test/event/scheduler/scheduler.o 00:04:41.879 CXX test/cpp_headers/bdev_module.o 00:04:41.879 LINK reactor_perf 00:04:41.879 LINK idxd_perf 00:04:41.879 LINK app_repeat 00:04:41.879 CXX test/cpp_headers/bdev_zone.o 00:04:41.879 LINK vtophys 00:04:41.879 LINK thread 00:04:41.879 CXX test/cpp_headers/bit_array.o 00:04:41.879 LINK scheduler 00:04:42.137 CXX test/cpp_headers/bit_pool.o 00:04:42.137 CXX test/cpp_headers/blob_bdev.o 00:04:42.137 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.137 CC examples/sock/hello_world/hello_sock.o 00:04:42.137 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:42.137 CXX test/cpp_headers/blobfs_bdev.o 00:04:42.137 CXX test/cpp_headers/blobfs.o 00:04:42.137 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:42.137 CC test/rpc_client/rpc_client_test.o 00:04:42.394 LINK env_dpdk_post_init 00:04:42.394 LINK hello_sock 00:04:42.394 CXX test/cpp_headers/blob.o 00:04:42.394 LINK rpc_client_test 00:04:42.394 CC test/accel/dif/dif.o 00:04:42.394 CC examples/accel/perf/accel_perf.o 00:04:42.394 CC test/env/memory/memory_ut.o 00:04:42.394 CC examples/blob/hello_world/hello_blob.o 00:04:42.652 CXX test/cpp_headers/conf.o 00:04:42.652 LINK vhost_fuzz 00:04:42.652 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:42.652 CC examples/nvme/hello_world/hello_world.o 00:04:42.652 LINK hello_blob 00:04:42.652 CXX test/cpp_headers/config.o 00:04:42.652 CC test/blobfs/mkfs/mkfs.o 00:04:42.652 CXX test/cpp_headers/cpuset.o 00:04:42.910 CC test/env/pci/pci_ut.o 00:04:42.910 LINK accel_perf 00:04:42.910 CXX test/cpp_headers/crc16.o 00:04:42.910 LINK hello_world 00:04:42.910 LINK mkfs 00:04:42.910 LINK hello_fsdev 00:04:42.910 LINK dif 00:04:43.168 CC examples/blob/cli/blobcli.o 00:04:43.168 LINK iscsi_fuzz 00:04:43.168 CXX test/cpp_headers/crc32.o 00:04:43.168 CXX test/cpp_headers/crc64.o 00:04:43.168 CC 
examples/nvme/reconnect/reconnect.o 00:04:43.168 LINK pci_ut 00:04:43.168 CXX test/cpp_headers/dif.o 00:04:43.168 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:43.168 CC examples/bdev/hello_world/hello_bdev.o 00:04:43.426 CXX test/cpp_headers/dma.o 00:04:43.426 CC examples/nvme/arbitration/arbitration.o 00:04:43.426 CXX test/cpp_headers/endian.o 00:04:43.426 CXX test/cpp_headers/env_dpdk.o 00:04:43.426 LINK blobcli 00:04:43.426 LINK memory_ut 00:04:43.426 LINK hello_bdev 00:04:43.426 LINK reconnect 00:04:43.684 CC test/lvol/esnap/esnap.o 00:04:43.684 CXX test/cpp_headers/env.o 00:04:43.684 LINK arbitration 00:04:43.684 CXX test/cpp_headers/event.o 00:04:43.684 CXX test/cpp_headers/fd_group.o 00:04:43.684 CC test/nvme/aer/aer.o 00:04:43.684 LINK nvme_manage 00:04:43.942 CC test/bdev/bdevio/bdevio.o 00:04:43.942 CC examples/nvme/hotplug/hotplug.o 00:04:43.942 CC examples/bdev/bdevperf/bdevperf.o 00:04:43.942 CXX test/cpp_headers/fd.o 00:04:43.942 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:43.942 CC examples/nvme/abort/abort.o 00:04:43.942 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:43.942 LINK aer 00:04:43.942 CC test/nvme/reset/reset.o 00:04:43.942 CXX test/cpp_headers/file.o 00:04:44.200 LINK hotplug 00:04:44.200 LINK cmb_copy 00:04:44.200 LINK pmr_persistence 00:04:44.200 LINK bdevio 00:04:44.200 CXX test/cpp_headers/fsdev.o 00:04:44.200 CXX test/cpp_headers/fsdev_module.o 00:04:44.200 LINK reset 00:04:44.200 CC test/nvme/sgl/sgl.o 00:04:44.200 LINK abort 00:04:44.457 CC test/nvme/e2edp/nvme_dp.o 00:04:44.457 CC test/nvme/overhead/overhead.o 00:04:44.457 CXX test/cpp_headers/ftl.o 00:04:44.457 CC test/nvme/err_injection/err_injection.o 00:04:44.457 CC test/nvme/startup/startup.o 00:04:44.457 LINK sgl 00:04:44.457 CC test/nvme/simple_copy/simple_copy.o 00:04:44.457 CC test/nvme/reserve/reserve.o 00:04:44.715 LINK bdevperf 00:04:44.715 LINK nvme_dp 00:04:44.715 CXX test/cpp_headers/fuse_dispatcher.o 00:04:44.715 LINK startup 00:04:44.715 LINK err_injection 00:04:44.715 LINK overhead 00:04:44.716 CXX test/cpp_headers/gpt_spec.o 00:04:44.716 LINK reserve 00:04:44.716 LINK simple_copy 00:04:44.716 CXX test/cpp_headers/hexlify.o 00:04:44.974 CC test/nvme/connect_stress/connect_stress.o 00:04:44.974 CXX test/cpp_headers/histogram_data.o 00:04:44.974 CC test/nvme/compliance/nvme_compliance.o 00:04:44.974 CC test/nvme/boot_partition/boot_partition.o 00:04:44.974 CC test/nvme/fused_ordering/fused_ordering.o 00:04:44.974 CXX test/cpp_headers/idxd.o 00:04:44.974 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:44.974 CC test/nvme/fdp/fdp.o 00:04:44.974 CC examples/nvmf/nvmf/nvmf.o 00:04:45.232 LINK connect_stress 00:04:45.232 CXX test/cpp_headers/idxd_spec.o 00:04:45.232 LINK boot_partition 00:04:45.232 CXX test/cpp_headers/init.o 00:04:45.232 LINK fused_ordering 00:04:45.232 LINK doorbell_aers 00:04:45.232 LINK nvme_compliance 00:04:45.232 CXX test/cpp_headers/ioat.o 00:04:45.232 CXX test/cpp_headers/ioat_spec.o 00:04:45.232 CXX test/cpp_headers/iscsi_spec.o 00:04:45.232 CXX test/cpp_headers/json.o 00:04:45.490 CC test/nvme/cuse/cuse.o 00:04:45.490 CXX test/cpp_headers/jsonrpc.o 00:04:45.490 LINK nvmf 00:04:45.490 LINK fdp 00:04:45.490 CXX test/cpp_headers/keyring.o 00:04:45.490 CXX test/cpp_headers/keyring_module.o 00:04:45.490 CXX test/cpp_headers/likely.o 00:04:45.490 CXX test/cpp_headers/log.o 00:04:45.490 CXX test/cpp_headers/lvol.o 00:04:45.490 CXX test/cpp_headers/md5.o 00:04:45.490 CXX test/cpp_headers/memory.o 00:04:45.490 CXX test/cpp_headers/mmio.o 00:04:45.490 CXX 
test/cpp_headers/nbd.o 00:04:45.490 CXX test/cpp_headers/net.o 00:04:45.490 CXX test/cpp_headers/notify.o 00:04:45.749 CXX test/cpp_headers/nvme.o 00:04:45.749 CXX test/cpp_headers/nvme_intel.o 00:04:45.749 CXX test/cpp_headers/nvme_ocssd.o 00:04:45.749 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:45.749 CXX test/cpp_headers/nvme_spec.o 00:04:45.749 CXX test/cpp_headers/nvme_zns.o 00:04:45.749 CXX test/cpp_headers/nvmf_cmd.o 00:04:45.749 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:45.749 CXX test/cpp_headers/nvmf.o 00:04:45.749 CXX test/cpp_headers/nvmf_spec.o 00:04:45.749 CXX test/cpp_headers/nvmf_transport.o 00:04:46.008 CXX test/cpp_headers/opal.o 00:04:46.008 CXX test/cpp_headers/opal_spec.o 00:04:46.008 CXX test/cpp_headers/pci_ids.o 00:04:46.008 CXX test/cpp_headers/pipe.o 00:04:46.008 CXX test/cpp_headers/queue.o 00:04:46.008 CXX test/cpp_headers/reduce.o 00:04:46.008 CXX test/cpp_headers/rpc.o 00:04:46.008 CXX test/cpp_headers/scheduler.o 00:04:46.008 CXX test/cpp_headers/scsi.o 00:04:46.008 CXX test/cpp_headers/scsi_spec.o 00:04:46.008 CXX test/cpp_headers/sock.o 00:04:46.008 CXX test/cpp_headers/stdinc.o 00:04:46.266 CXX test/cpp_headers/string.o 00:04:46.266 CXX test/cpp_headers/thread.o 00:04:46.266 CXX test/cpp_headers/trace.o 00:04:46.266 CXX test/cpp_headers/trace_parser.o 00:04:46.266 CXX test/cpp_headers/tree.o 00:04:46.266 CXX test/cpp_headers/ublk.o 00:04:46.266 CXX test/cpp_headers/util.o 00:04:46.266 CXX test/cpp_headers/uuid.o 00:04:46.266 CXX test/cpp_headers/version.o 00:04:46.266 CXX test/cpp_headers/vfio_user_pci.o 00:04:46.266 CXX test/cpp_headers/vfio_user_spec.o 00:04:46.266 CXX test/cpp_headers/vhost.o 00:04:46.266 CXX test/cpp_headers/vmd.o 00:04:46.266 CXX test/cpp_headers/xor.o 00:04:46.525 CXX test/cpp_headers/zipf.o 00:04:46.525 LINK cuse 00:04:48.432 LINK esnap 00:04:48.432 00:04:48.432 real 1m19.121s 00:04:48.432 user 6m53.403s 00:04:48.432 sys 1m46.793s 00:04:48.432 ************************************ 00:04:48.432 END TEST make 00:04:48.432 ************************************ 00:04:48.432 07:32:06 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:48.432 07:32:06 make -- common/autotest_common.sh@10 -- $ set +x 00:04:48.432 07:32:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:48.432 07:32:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:48.432 07:32:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:48.432 07:32:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.432 07:32:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:48.432 07:32:06 -- pm/common@44 -- $ pid=5305 00:04:48.432 07:32:06 -- pm/common@50 -- $ kill -TERM 5305 00:04:48.432 07:32:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.432 07:32:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:48.432 07:32:06 -- pm/common@44 -- $ pid=5307 00:04:48.432 07:32:06 -- pm/common@50 -- $ kill -TERM 5307 00:04:48.432 07:32:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:48.432 07:32:06 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:48.692 07:32:06 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.692 07:32:06 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.692 07:32:06 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.692 07:32:06 -- 
common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.692 07:32:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.692 07:32:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.692 07:32:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.692 07:32:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.692 07:32:06 -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.692 07:32:06 -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.692 07:32:06 -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.692 07:32:06 -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.692 07:32:06 -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.692 07:32:06 -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.692 07:32:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.692 07:32:06 -- scripts/common.sh@344 -- # case "$op" in 00:04:48.692 07:32:06 -- scripts/common.sh@345 -- # : 1 00:04:48.692 07:32:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.692 07:32:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.692 07:32:06 -- scripts/common.sh@365 -- # decimal 1 00:04:48.692 07:32:06 -- scripts/common.sh@353 -- # local d=1 00:04:48.692 07:32:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.692 07:32:06 -- scripts/common.sh@355 -- # echo 1 00:04:48.692 07:32:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.693 07:32:06 -- scripts/common.sh@366 -- # decimal 2 00:04:48.693 07:32:06 -- scripts/common.sh@353 -- # local d=2 00:04:48.693 07:32:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.693 07:32:06 -- scripts/common.sh@355 -- # echo 2 00:04:48.693 07:32:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.693 07:32:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.693 07:32:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.693 07:32:06 -- scripts/common.sh@368 -- # return 0 00:04:48.693 07:32:06 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.693 07:32:06 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.693 --rc genhtml_branch_coverage=1 00:04:48.693 --rc genhtml_function_coverage=1 00:04:48.693 --rc genhtml_legend=1 00:04:48.693 --rc geninfo_all_blocks=1 00:04:48.693 --rc geninfo_unexecuted_blocks=1 00:04:48.693 00:04:48.693 ' 00:04:48.693 07:32:06 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.693 --rc genhtml_branch_coverage=1 00:04:48.693 --rc genhtml_function_coverage=1 00:04:48.693 --rc genhtml_legend=1 00:04:48.693 --rc geninfo_all_blocks=1 00:04:48.693 --rc geninfo_unexecuted_blocks=1 00:04:48.693 00:04:48.693 ' 00:04:48.693 07:32:06 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:48.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.693 --rc genhtml_branch_coverage=1 00:04:48.693 --rc genhtml_function_coverage=1 00:04:48.693 --rc genhtml_legend=1 00:04:48.693 --rc geninfo_all_blocks=1 00:04:48.693 --rc geninfo_unexecuted_blocks=1 00:04:48.693 00:04:48.693 ' 00:04:48.693 07:32:06 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.693 --rc genhtml_branch_coverage=1 00:04:48.693 --rc genhtml_function_coverage=1 00:04:48.693 --rc genhtml_legend=1 00:04:48.693 --rc geninfo_all_blocks=1 00:04:48.693 --rc geninfo_unexecuted_blocks=1 
00:04:48.693 00:04:48.693 ' 00:04:48.693 07:32:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.693 07:32:06 -- nvmf/common.sh@7 -- # uname -s 00:04:48.693 07:32:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.693 07:32:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.693 07:32:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.693 07:32:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.693 07:32:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.693 07:32:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.693 07:32:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.693 07:32:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.693 07:32:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.693 07:32:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.693 07:32:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:04:48.693 07:32:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:04:48.693 07:32:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.693 07:32:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.693 07:32:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:48.693 07:32:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.693 07:32:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.693 07:32:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.693 07:32:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.693 07:32:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.693 07:32:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.693 07:32:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.693 07:32:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.693 07:32:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.693 07:32:06 -- paths/export.sh@5 -- # export PATH 00:04:48.693 07:32:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.693 07:32:06 -- nvmf/common.sh@51 -- # : 0 00:04:48.693 07:32:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.693 07:32:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:48.693 07:32:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.693 07:32:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.693 07:32:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.693 07:32:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.693 07:32:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.693 07:32:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.693 07:32:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.693 07:32:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:48.693 07:32:06 -- spdk/autotest.sh@32 -- # uname -s 00:04:48.693 07:32:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:48.693 07:32:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:48.693 07:32:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:48.693 07:32:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:48.693 07:32:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:48.693 07:32:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:48.693 07:32:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:48.693 07:32:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:48.693 07:32:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:48.693 07:32:06 -- spdk/autotest.sh@48 -- # udevadm_pid=54320 00:04:48.693 07:32:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:48.693 07:32:06 -- pm/common@17 -- # local monitor 00:04:48.693 07:32:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.693 07:32:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.693 07:32:06 -- pm/common@25 -- # sleep 1 00:04:48.693 07:32:06 -- pm/common@21 -- # date +%s 00:04:48.693 07:32:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731051126 00:04:48.693 07:32:06 -- pm/common@21 -- # date +%s 00:04:48.693 07:32:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731051126 00:04:49.064 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731051126_collect-cpu-load.pm.log 00:04:49.064 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731051126_collect-vmstat.pm.log 00:04:50.003 07:32:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:50.003 07:32:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:50.003 07:32:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.003 07:32:07 -- common/autotest_common.sh@10 -- # set +x 00:04:50.003 07:32:07 -- spdk/autotest.sh@59 -- # create_test_list 00:04:50.003 07:32:07 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:50.003 07:32:07 -- common/autotest_common.sh@10 -- # set +x 00:04:50.003 07:32:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:50.003 07:32:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:50.003 07:32:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:50.003 07:32:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:50.003 07:32:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
00:04:50.003 07:32:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:50.003 07:32:07 -- common/autotest_common.sh@1455 -- # uname 00:04:50.003 07:32:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:50.003 07:32:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:50.003 07:32:07 -- common/autotest_common.sh@1475 -- # uname 00:04:50.003 07:32:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:50.003 07:32:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:50.003 07:32:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:50.003 lcov: LCOV version 1.15 00:04:50.003 07:32:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:08.107 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:08.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:22.992 07:32:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:22.992 07:32:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:22.992 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:05:22.992 07:32:40 -- spdk/autotest.sh@78 -- # rm -f 00:05:22.992 07:32:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.295 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:23.295 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:23.295 07:32:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:23.295 07:32:41 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:23.295 07:32:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:23.296 07:32:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:23.296 07:32:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.296 07:32:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:23.296 07:32:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:23.296 07:32:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.296 07:32:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.296 07:32:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.296 07:32:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:23.296 07:32:41 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:23.296 07:32:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:23.296 07:32:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.296 07:32:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.296 07:32:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:23.296 07:32:41 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:23.296 07:32:41 -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme1n2/queue/zoned ]] 00:05:23.296 07:32:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.296 07:32:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:23.296 07:32:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:23.296 07:32:41 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:23.296 07:32:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:23.296 07:32:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:23.296 07:32:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:23.296 07:32:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.296 07:32:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.296 07:32:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:23.296 07:32:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:23.296 07:32:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:23.576 No valid GPT data, bailing 00:05:23.576 07:32:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.576 07:32:41 -- scripts/common.sh@394 -- # pt= 00:05:23.576 07:32:41 -- scripts/common.sh@395 -- # return 1 00:05:23.576 07:32:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:23.576 1+0 records in 00:05:23.576 1+0 records out 00:05:23.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005667 s, 185 MB/s 00:05:23.576 07:32:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.576 07:32:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.576 07:32:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:23.576 07:32:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:23.576 07:32:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:23.576 No valid GPT data, bailing 00:05:23.576 07:32:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:23.576 07:32:41 -- scripts/common.sh@394 -- # pt= 00:05:23.576 07:32:41 -- scripts/common.sh@395 -- # return 1 00:05:23.576 07:32:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:23.576 1+0 records in 00:05:23.576 1+0 records out 00:05:23.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00544367 s, 193 MB/s 00:05:23.576 07:32:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.576 07:32:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.576 07:32:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:23.576 07:32:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:23.576 07:32:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:23.576 No valid GPT data, bailing 00:05:23.576 07:32:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:23.576 07:32:41 -- scripts/common.sh@394 -- # pt= 00:05:23.576 07:32:41 -- scripts/common.sh@395 -- # return 1 00:05:23.576 07:32:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:23.576 1+0 records in 00:05:23.576 1+0 records out 00:05:23.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532466 s, 197 MB/s 00:05:23.576 07:32:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.576 07:32:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.576 07:32:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:23.576 07:32:41 -- scripts/common.sh@381 -- # local 
block=/dev/nvme1n3 pt 00:05:23.576 07:32:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:23.835 No valid GPT data, bailing 00:05:23.835 07:32:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:23.835 07:32:41 -- scripts/common.sh@394 -- # pt= 00:05:23.835 07:32:41 -- scripts/common.sh@395 -- # return 1 00:05:23.835 07:32:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:23.835 1+0 records in 00:05:23.835 1+0 records out 00:05:23.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385723 s, 272 MB/s 00:05:23.835 07:32:41 -- spdk/autotest.sh@105 -- # sync 00:05:23.835 07:32:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:23.835 07:32:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:23.835 07:32:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:26.369 07:32:43 -- spdk/autotest.sh@111 -- # uname -s 00:05:26.369 07:32:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:26.369 07:32:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:26.369 07:32:43 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:26.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.936 Hugepages 00:05:26.936 node hugesize free / total 00:05:26.936 node0 1048576kB 0 / 0 00:05:26.936 node0 2048kB 0 / 0 00:05:26.936 00:05:26.936 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:26.936 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:26.936 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:27.195 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:27.195 07:32:44 -- spdk/autotest.sh@117 -- # uname -s 00:05:27.195 07:32:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:27.195 07:32:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:27.195 07:32:44 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.022 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.022 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.022 07:32:45 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:29.422 07:32:46 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:29.422 07:32:46 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:29.422 07:32:46 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:29.422 07:32:46 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:29.422 07:32:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:29.422 07:32:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:29.422 07:32:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.422 07:32:46 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.422 07:32:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:29.422 07:32:47 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:29.422 07:32:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:29.422 07:32:47 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.680 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.680 Waiting for block devices as requested 00:05:29.680 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.680 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:29.984 07:32:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:29.984 07:32:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:29.984 07:32:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:29.984 07:32:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:29.984 07:32:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:29.984 07:32:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:29.984 07:32:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:29.984 07:32:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:29.984 07:32:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:29.984 07:32:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:29.984 07:32:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:29.984 07:32:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:29.984 07:32:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:29.984 07:32:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:29.984 07:32:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:29.984 07:32:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:29.984 07:32:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:29.984 07:32:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:29.984 07:32:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:29.984 07:32:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:29.984 07:32:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:29.984 07:32:47 -- common/autotest_common.sh@1541 -- # continue 00:05:29.984 07:32:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:29.984 07:32:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:29.984 07:32:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:29.984 07:32:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:29.984 07:32:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:29.984 07:32:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:29.984 07:32:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:29.984 07:32:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:29.984 07:32:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:29.984 07:32:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:29.984 07:32:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:29.984 07:32:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:29.984 07:32:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:29.984 07:32:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:29.984 07:32:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:29.984 07:32:47 -- 
common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:29.984 07:32:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:29.984 07:32:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:29.984 07:32:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:29.984 07:32:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:29.984 07:32:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:29.984 07:32:47 -- common/autotest_common.sh@1541 -- # continue 00:05:29.984 07:32:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:29.984 07:32:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.984 07:32:47 -- common/autotest_common.sh@10 -- # set +x 00:05:29.984 07:32:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:29.984 07:32:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.984 07:32:47 -- common/autotest_common.sh@10 -- # set +x 00:05:29.984 07:32:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.923 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.923 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.923 07:32:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:30.923 07:32:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.923 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:30.923 07:32:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:30.923 07:32:48 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:30.923 07:32:48 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:30.923 07:32:48 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:30.923 07:32:48 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:30.923 07:32:48 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:30.923 07:32:48 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:30.923 07:32:48 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:30.923 07:32:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:30.923 07:32:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:30.923 07:32:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.923 07:32:48 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:30.923 07:32:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:31.182 07:32:48 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:31.182 07:32:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:31.182 07:32:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:31.182 07:32:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:31.182 07:32:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:31.182 07:32:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.182 07:32:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:31.182 07:32:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:31.182 07:32:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:31.182 07:32:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.182 07:32:48 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 
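In the pre_cleanup block above, autotest_common.sh resolves each PCI address (0000:00:10.0, 0000:00:11.0) to its /dev/nvme* controller through /sys/class/nvme, then parses 'nvme id-ctrl': oacs=0x12a has bit 3 (0x8, Namespace Management) set and unvmcap=0 reports no unallocated NVM capacity, so both controllers are skipped with 'continue'. The same check can be reproduced by hand with nvme-cli (run as root; the device name is just the one from this trace):

    ctrl=/dev/nvme1                                  # controller name taken from the trace above
    oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oacs/ {print $2}')
    if (( (oacs & 0x8) != 0 )); then                 # OACS bit 3 = Namespace Management supported
        unvmcap=$(nvme id-ctrl "$ctrl" | awk -F: '/^unvmcap/ {print $2}')
        echo "namespace management supported; unallocated NVM capacity:${unvmcap} bytes"
    fi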
00:05:31.182 07:32:48 -- common/autotest_common.sh@1570 -- # return 0 00:05:31.182 07:32:48 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:31.182 07:32:48 -- common/autotest_common.sh@1578 -- # return 0 00:05:31.182 07:32:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:31.182 07:32:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:31.182 07:32:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:31.182 07:32:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:31.182 07:32:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:31.182 07:32:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.182 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:31.182 07:32:48 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:31.182 07:32:48 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:31.182 07:32:48 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:31.182 07:32:48 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:31.182 07:32:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.182 07:32:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.182 07:32:48 -- common/autotest_common.sh@10 -- # set +x 00:05:31.182 ************************************ 00:05:31.182 START TEST env 00:05:31.182 ************************************ 00:05:31.182 07:32:48 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:31.182 * Looking for test storage... 00:05:31.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:31.182 07:32:49 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.182 07:32:49 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.182 07:32:49 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.441 07:32:49 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.441 07:32:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.441 07:32:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.441 07:32:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.441 07:32:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.441 07:32:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.441 07:32:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.441 07:32:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.441 07:32:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.441 07:32:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.441 07:32:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.441 07:32:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.441 07:32:49 env -- scripts/common.sh@344 -- # case "$op" in 00:05:31.441 07:32:49 env -- scripts/common.sh@345 -- # : 1 00:05:31.441 07:32:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.441 07:32:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.441 07:32:49 env -- scripts/common.sh@365 -- # decimal 1 00:05:31.441 07:32:49 env -- scripts/common.sh@353 -- # local d=1 00:05:31.441 07:32:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.441 07:32:49 env -- scripts/common.sh@355 -- # echo 1 00:05:31.441 07:32:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.441 07:32:49 env -- scripts/common.sh@366 -- # decimal 2 00:05:31.441 07:32:49 env -- scripts/common.sh@353 -- # local d=2 00:05:31.441 07:32:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.441 07:32:49 env -- scripts/common.sh@355 -- # echo 2 00:05:31.441 07:32:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.441 07:32:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.441 07:32:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.441 07:32:49 env -- scripts/common.sh@368 -- # return 0 00:05:31.441 07:32:49 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.441 07:32:49 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.441 --rc genhtml_branch_coverage=1 00:05:31.441 --rc genhtml_function_coverage=1 00:05:31.441 --rc genhtml_legend=1 00:05:31.441 --rc geninfo_all_blocks=1 00:05:31.441 --rc geninfo_unexecuted_blocks=1 00:05:31.441 00:05:31.441 ' 00:05:31.441 07:32:49 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.441 --rc genhtml_branch_coverage=1 00:05:31.441 --rc genhtml_function_coverage=1 00:05:31.441 --rc genhtml_legend=1 00:05:31.441 --rc geninfo_all_blocks=1 00:05:31.441 --rc geninfo_unexecuted_blocks=1 00:05:31.441 00:05:31.441 ' 00:05:31.441 07:32:49 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.441 --rc genhtml_branch_coverage=1 00:05:31.441 --rc genhtml_function_coverage=1 00:05:31.441 --rc genhtml_legend=1 00:05:31.441 --rc geninfo_all_blocks=1 00:05:31.441 --rc geninfo_unexecuted_blocks=1 00:05:31.441 00:05:31.441 ' 00:05:31.441 07:32:49 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.441 --rc genhtml_branch_coverage=1 00:05:31.441 --rc genhtml_function_coverage=1 00:05:31.441 --rc genhtml_legend=1 00:05:31.441 --rc geninfo_all_blocks=1 00:05:31.441 --rc geninfo_unexecuted_blocks=1 00:05:31.441 00:05:31.441 ' 00:05:31.441 07:32:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:31.441 07:32:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.441 07:32:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.441 07:32:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.441 ************************************ 00:05:31.441 START TEST env_memory 00:05:31.441 ************************************ 00:05:31.441 07:32:49 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:31.441 00:05:31.441 00:05:31.441 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.441 http://cunit.sourceforge.net/ 00:05:31.441 00:05:31.441 00:05:31.441 Suite: memory 00:05:31.441 Test: alloc and free memory map ...[2024-11-08 07:32:49.219180] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:31.441 passed 00:05:31.441 Test: mem map translation ...[2024-11-08 07:32:49.253007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:31.441 [2024-11-08 07:32:49.253204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:31.441 [2024-11-08 07:32:49.253427] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:31.441 [2024-11-08 07:32:49.253677] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:31.441 passed 00:05:31.441 Test: mem map registration ...[2024-11-08 07:32:49.317092] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:31.441 [2024-11-08 07:32:49.317281] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:31.441 passed 00:05:31.701 Test: mem map adjacent registrations ...passed 00:05:31.701 00:05:31.701 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.701 suites 1 1 n/a 0 0 00:05:31.701 tests 4 4 4 0 0 00:05:31.701 asserts 152 152 152 0 n/a 00:05:31.701 00:05:31.701 Elapsed time = 0.216 seconds 00:05:31.701 00:05:31.701 real 0m0.236s 00:05:31.701 user 0m0.210s 00:05:31.701 sys 0m0.020s 00:05:31.701 07:32:49 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.701 ************************************ 00:05:31.701 END TEST env_memory 00:05:31.701 ************************************ 00:05:31.701 07:32:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:31.701 07:32:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.701 07:32:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.701 07:32:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.701 07:32:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.701 ************************************ 00:05:31.701 START TEST env_vtophys 00:05:31.701 ************************************ 00:05:31.701 07:32:49 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.701 EAL: lib.eal log level changed from notice to debug 00:05:31.701 EAL: Detected lcore 0 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 1 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 2 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 3 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 4 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 5 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 6 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 7 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 8 as core 0 on socket 0 00:05:31.701 EAL: Detected lcore 9 as core 0 on socket 0 00:05:31.701 EAL: Maximum logical cores by configuration: 128 00:05:31.701 EAL: Detected CPU lcores: 10 00:05:31.701 EAL: Detected NUMA nodes: 1 00:05:31.701 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:31.701 EAL: Detected shared linkage of DPDK 00:05:31.701 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:31.701 EAL: Selected IOVA mode 'PA' 00:05:31.701 EAL: Probing VFIO support... 00:05:31.701 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.701 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:31.701 EAL: Ask a virtual area of 0x2e000 bytes 00:05:31.701 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:31.701 EAL: Setting up physically contiguous memory... 00:05:31.701 EAL: Setting maximum number of open files to 524288 00:05:31.701 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:31.701 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:31.701 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.701 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:31.701 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.701 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.701 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:31.701 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:31.701 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.701 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:31.701 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.701 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.701 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:31.701 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:31.701 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.701 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:31.701 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.701 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.701 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:31.701 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:31.701 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.701 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:31.701 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.701 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.701 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:31.701 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:31.701 EAL: Hugepages will be freed exactly as allocated. 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: TSC frequency is ~2100000 KHz 00:05:31.701 EAL: Main lcore 0 is ready (tid=7eff63909a00;cpuset=[0]) 00:05:31.701 EAL: Trying to obtain current memory policy. 00:05:31.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.701 EAL: Restoring previous memory policy: 0 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was expanded by 2MB 00:05:31.701 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.701 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:31.701 EAL: Mem event callback 'spdk:(nil)' registered 00:05:31.701 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:31.701 00:05:31.701 00:05:31.701 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.701 http://cunit.sourceforge.net/ 00:05:31.701 00:05:31.701 00:05:31.701 Suite: components_suite 00:05:31.701 Test: vtophys_malloc_test ...passed 00:05:31.701 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:31.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.701 EAL: Restoring previous memory policy: 4 00:05:31.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was expanded by 4MB 00:05:31.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was shrunk by 4MB 00:05:31.701 EAL: Trying to obtain current memory policy. 00:05:31.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.701 EAL: Restoring previous memory policy: 4 00:05:31.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was expanded by 6MB 00:05:31.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was shrunk by 6MB 00:05:31.701 EAL: Trying to obtain current memory policy. 00:05:31.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.701 EAL: Restoring previous memory policy: 4 00:05:31.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was expanded by 10MB 00:05:31.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was shrunk by 10MB 00:05:31.701 EAL: Trying to obtain current memory policy. 00:05:31.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.701 EAL: Restoring previous memory policy: 4 00:05:31.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was expanded by 18MB 00:05:31.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.701 EAL: request: mp_malloc_sync 00:05:31.701 EAL: No shared files mode enabled, IPC is disabled 00:05:31.701 EAL: Heap on socket 0 was shrunk by 18MB 00:05:31.701 EAL: Trying to obtain current memory policy. 00:05:31.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.701 EAL: Restoring previous memory policy: 4 00:05:31.702 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.702 EAL: request: mp_malloc_sync 00:05:31.702 EAL: No shared files mode enabled, IPC is disabled 00:05:31.702 EAL: Heap on socket 0 was expanded by 34MB 00:05:31.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.960 EAL: request: mp_malloc_sync 00:05:31.960 EAL: No shared files mode enabled, IPC is disabled 00:05:31.960 EAL: Heap on socket 0 was shrunk by 34MB 00:05:31.960 EAL: Trying to obtain current memory policy. 
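Before the malloc rounds, EAL reserves virtual address space for its memseg lists: each 'Ask a virtual area of 0x400000000 bytes' above corresponds to one list of n_segs:8192 hugepages of 2 MiB, reserved up front even though no hugepages are mapped yet. A quick arithmetic check of that reading:

    # 8192 segments x 2 MiB hugepages per memseg list:
    echo $(( 8192 * 2097152 ))   # 17179869184 bytes = 0x400000000 = 16 GiB of VA per list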
00:05:31.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.960 EAL: Restoring previous memory policy: 4 00:05:31.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.960 EAL: request: mp_malloc_sync 00:05:31.960 EAL: No shared files mode enabled, IPC is disabled 00:05:31.960 EAL: Heap on socket 0 was expanded by 66MB 00:05:31.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.960 EAL: request: mp_malloc_sync 00:05:31.960 EAL: No shared files mode enabled, IPC is disabled 00:05:31.960 EAL: Heap on socket 0 was shrunk by 66MB 00:05:31.960 EAL: Trying to obtain current memory policy. 00:05:31.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.960 EAL: Restoring previous memory policy: 4 00:05:31.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.960 EAL: request: mp_malloc_sync 00:05:31.960 EAL: No shared files mode enabled, IPC is disabled 00:05:31.960 EAL: Heap on socket 0 was expanded by 130MB 00:05:31.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.960 EAL: request: mp_malloc_sync 00:05:31.960 EAL: No shared files mode enabled, IPC is disabled 00:05:31.960 EAL: Heap on socket 0 was shrunk by 130MB 00:05:31.960 EAL: Trying to obtain current memory policy. 00:05:31.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.960 EAL: Restoring previous memory policy: 4 00:05:31.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.960 EAL: request: mp_malloc_sync 00:05:31.960 EAL: No shared files mode enabled, IPC is disabled 00:05:31.960 EAL: Heap on socket 0 was expanded by 258MB 00:05:31.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.960 EAL: request: mp_malloc_sync 00:05:31.960 EAL: No shared files mode enabled, IPC is disabled 00:05:31.960 EAL: Heap on socket 0 was shrunk by 258MB 00:05:31.960 EAL: Trying to obtain current memory policy. 00:05:31.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.219 EAL: Restoring previous memory policy: 4 00:05:32.219 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.219 EAL: request: mp_malloc_sync 00:05:32.219 EAL: No shared files mode enabled, IPC is disabled 00:05:32.219 EAL: Heap on socket 0 was expanded by 514MB 00:05:32.219 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.478 EAL: request: mp_malloc_sync 00:05:32.478 EAL: No shared files mode enabled, IPC is disabled 00:05:32.478 EAL: Heap on socket 0 was shrunk by 514MB 00:05:32.478 EAL: Trying to obtain current memory policy. 
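The expand/shrink pairs reported by vtophys_malloc_test (4, 6, 10, 18, 34, 66, 130, 258, 514 MB so far, with a final 1 GiB round to come) follow 2^k + 2 MB, consistent with the test doubling its buffer each round plus what looks like one extra 2 MiB hugepage of allocator overhead; that is an inference from the reported sizes, not from the test source:

    # Reproduce the size series implied by the heap-expansion messages above:
    for k in $(seq 1 10); do printf '%d MB\n' $(( (1 << k) + 2 )); done   # 4 6 10 ... 1026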
00:05:32.478 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.478 EAL: Restoring previous memory policy: 4 00:05:32.478 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.478 EAL: request: mp_malloc_sync 00:05:32.478 EAL: No shared files mode enabled, IPC is disabled 00:05:32.478 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.995 EAL: request: mp_malloc_sync 00:05:32.995 passed 00:05:32.995 00:05:32.995 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.995 suites 1 1 n/a 0 0 00:05:32.995 tests 2 2 2 0 0 00:05:32.995 asserts 5561 5561 5561 0 n/a 00:05:32.995 00:05:32.995 Elapsed time = 1.035 seconds 00:05:32.995 EAL: No shared files mode enabled, IPC is disabled 00:05:32.995 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:32.995 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.995 EAL: request: mp_malloc_sync 00:05:32.995 EAL: No shared files mode enabled, IPC is disabled 00:05:32.995 EAL: Heap on socket 0 was shrunk by 2MB 00:05:32.995 EAL: No shared files mode enabled, IPC is disabled 00:05:32.995 EAL: No shared files mode enabled, IPC is disabled 00:05:32.995 EAL: No shared files mode enabled, IPC is disabled 00:05:32.995 00:05:32.995 real 0m1.252s 00:05:32.995 user 0m0.668s 00:05:32.995 sys 0m0.448s 00:05:32.995 07:32:50 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.995 ************************************ 00:05:32.995 END TEST env_vtophys 00:05:32.995 ************************************ 00:05:32.995 07:32:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:32.995 07:32:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.995 07:32:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:32.996 07:32:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.996 07:32:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.996 ************************************ 00:05:32.996 START TEST env_pci 00:05:32.996 ************************************ 00:05:32.996 07:32:50 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.996 00:05:32.996 00:05:32.996 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.996 http://cunit.sourceforge.net/ 00:05:32.996 00:05:32.996 00:05:32.996 Suite: pci 00:05:32.996 Test: pci_hook ...[2024-11-08 07:32:50.792370] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56583 has claimed it 00:05:32.996 passed 00:05:32.996 00:05:32.996 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.996 suites 1 1 n/a 0 0 00:05:32.996 tests 1 1 1 0 0 00:05:32.996 asserts 25 25 25 0 n/a 00:05:32.996 00:05:32.996 Elapsed time = 0.003 seconds 00:05:32.996 EAL: Cannot find device (10000:00:01.0) 00:05:32.996 EAL: Failed to attach device on primary process 00:05:32.996 00:05:32.996 real 0m0.024s 00:05:32.996 user 0m0.013s 00:05:32.996 sys 0m0.010s 00:05:32.996 07:32:50 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.996 ************************************ 00:05:32.996 END TEST env_pci 00:05:32.996 ************************************ 00:05:32.996 07:32:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:32.996 07:32:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:32.996 07:32:50 env -- env/env.sh@15 -- # uname 00:05:32.996 07:32:50 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:32.996 07:32:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:32.996 07:32:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.996 07:32:50 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:32.996 07:32:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.996 07:32:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.996 ************************************ 00:05:32.996 START TEST env_dpdk_post_init 00:05:32.996 ************************************ 00:05:32.996 07:32:50 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.996 EAL: Detected CPU lcores: 10 00:05:32.996 EAL: Detected NUMA nodes: 1 00:05:32.996 EAL: Detected shared linkage of DPDK 00:05:32.996 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.996 EAL: Selected IOVA mode 'PA' 00:05:33.255 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.255 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:33.255 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:33.255 Starting DPDK initialization... 00:05:33.255 Starting SPDK post initialization... 00:05:33.255 SPDK NVMe probe 00:05:33.255 Attaching to 0000:00:10.0 00:05:33.255 Attaching to 0000:00:11.0 00:05:33.255 Attached to 0000:00:10.0 00:05:33.255 Attached to 0000:00:11.0 00:05:33.255 Cleaning up... 00:05:33.255 ************************************ 00:05:33.255 END TEST env_dpdk_post_init 00:05:33.255 ************************************ 00:05:33.255 00:05:33.255 real 0m0.182s 00:05:33.255 user 0m0.045s 00:05:33.255 sys 0m0.039s 00:05:33.255 07:32:51 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.255 07:32:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.255 07:32:51 env -- env/env.sh@26 -- # uname 00:05:33.255 07:32:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.255 07:32:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.255 07:32:51 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.255 07:32:51 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.255 07:32:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.255 ************************************ 00:05:33.255 START TEST env_mem_callbacks 00:05:33.255 ************************************ 00:05:33.255 07:32:51 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.255 EAL: Detected CPU lcores: 10 00:05:33.255 EAL: Detected NUMA nodes: 1 00:05:33.255 EAL: Detected shared linkage of DPDK 00:05:33.255 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.255 EAL: Selected IOVA mode 'PA' 00:05:33.514 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.514 00:05:33.514 00:05:33.514 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.514 http://cunit.sourceforge.net/ 00:05:33.514 00:05:33.514 00:05:33.514 Suite: memory 00:05:33.514 Test: test ... 
00:05:33.514 register 0x200000200000 2097152 00:05:33.514 malloc 3145728 00:05:33.514 register 0x200000400000 4194304 00:05:33.514 buf 0x200000500000 len 3145728 PASSED 00:05:33.514 malloc 64 00:05:33.514 buf 0x2000004fff40 len 64 PASSED 00:05:33.514 malloc 4194304 00:05:33.514 register 0x200000800000 6291456 00:05:33.514 buf 0x200000a00000 len 4194304 PASSED 00:05:33.514 free 0x200000500000 3145728 00:05:33.514 free 0x2000004fff40 64 00:05:33.514 unregister 0x200000400000 4194304 PASSED 00:05:33.514 free 0x200000a00000 4194304 00:05:33.514 unregister 0x200000800000 6291456 PASSED 00:05:33.514 malloc 8388608 00:05:33.514 register 0x200000400000 10485760 00:05:33.514 buf 0x200000600000 len 8388608 PASSED 00:05:33.514 free 0x200000600000 8388608 00:05:33.514 unregister 0x200000400000 10485760 PASSED 00:05:33.514 passed 00:05:33.514 00:05:33.514 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.514 suites 1 1 n/a 0 0 00:05:33.514 tests 1 1 1 0 0 00:05:33.514 asserts 15 15 15 0 n/a 00:05:33.514 00:05:33.514 Elapsed time = 0.008 seconds 00:05:33.514 00:05:33.514 real 0m0.149s 00:05:33.514 user 0m0.015s 00:05:33.514 sys 0m0.032s 00:05:33.514 ************************************ 00:05:33.514 END TEST env_mem_callbacks 00:05:33.514 ************************************ 00:05:33.514 07:32:51 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.514 07:32:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.514 ************************************ 00:05:33.514 END TEST env 00:05:33.514 ************************************ 00:05:33.514 00:05:33.514 real 0m2.358s 00:05:33.514 user 0m1.155s 00:05:33.514 sys 0m0.864s 00:05:33.514 07:32:51 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.514 07:32:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.514 07:32:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.514 07:32:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.514 07:32:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.514 07:32:51 -- common/autotest_common.sh@10 -- # set +x 00:05:33.514 ************************************ 00:05:33.514 START TEST rpc 00:05:33.514 ************************************ 00:05:33.514 07:32:51 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.514 * Looking for test storage... 
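The mem_callbacks trace above reads as the test's memory-event hook firing around each allocation: 'register <vaddr> <len>' lines appear when the DPDK heap grows to satisfy a malloc, 'buf ... PASSED' confirms the returned buffer lies inside a registered region, and 'unregister ... PASSED' fires as freed memory is handed back. With that, every env sub-test (memory, vtophys, pci, dpdk_post_init, mem_callbacks) has passed in about 2.4 s of wall time. A single unit can be rerun by hand using the binaries named in the log, assuming the hugepage/driver setup performed earlier by scripts/setup.sh is still in place:

    # Hedged manual rerun of one env unit test (path taken from the log):
    sudo /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks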
00:05:33.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.514 07:32:51 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.773 07:32:51 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.773 07:32:51 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.773 07:32:51 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.773 07:32:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.774 07:32:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.774 07:32:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.774 07:32:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.774 07:32:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.774 07:32:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.774 07:32:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.774 07:32:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.774 07:32:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.774 07:32:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.774 07:32:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.774 07:32:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.774 07:32:51 rpc -- scripts/common.sh@345 -- # : 1 00:05:33.774 07:32:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.774 07:32:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.774 07:32:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.774 07:32:51 rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.774 07:32:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.774 07:32:51 rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.774 07:32:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.774 07:32:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.774 07:32:51 rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.774 07:32:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.774 07:32:51 rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.774 07:32:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.774 07:32:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.774 07:32:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.774 07:32:51 rpc -- scripts/common.sh@368 -- # return 0 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.774 --rc genhtml_branch_coverage=1 00:05:33.774 --rc genhtml_function_coverage=1 00:05:33.774 --rc genhtml_legend=1 00:05:33.774 --rc geninfo_all_blocks=1 00:05:33.774 --rc geninfo_unexecuted_blocks=1 00:05:33.774 00:05:33.774 ' 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.774 --rc genhtml_branch_coverage=1 00:05:33.774 --rc genhtml_function_coverage=1 00:05:33.774 --rc genhtml_legend=1 00:05:33.774 --rc geninfo_all_blocks=1 00:05:33.774 --rc geninfo_unexecuted_blocks=1 00:05:33.774 00:05:33.774 ' 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.774 --rc genhtml_branch_coverage=1 00:05:33.774 --rc genhtml_function_coverage=1 00:05:33.774 --rc 
genhtml_legend=1 00:05:33.774 --rc geninfo_all_blocks=1 00:05:33.774 --rc geninfo_unexecuted_blocks=1 00:05:33.774 00:05:33.774 ' 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.774 --rc genhtml_branch_coverage=1 00:05:33.774 --rc genhtml_function_coverage=1 00:05:33.774 --rc genhtml_legend=1 00:05:33.774 --rc geninfo_all_blocks=1 00:05:33.774 --rc geninfo_unexecuted_blocks=1 00:05:33.774 00:05:33.774 ' 00:05:33.774 07:32:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56706 00:05:33.774 07:32:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.774 07:32:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56706 00:05:33.774 07:32:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@833 -- # '[' -z 56706 ']' 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.774 07:32:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.774 [2024-11-08 07:32:51.610467] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:05:33.774 [2024-11-08 07:32:51.610749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56706 ] 00:05:34.033 [2024-11-08 07:32:51.752315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.033 [2024-11-08 07:32:51.803370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.033 [2024-11-08 07:32:51.803635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56706' to capture a snapshot of events at runtime. 00:05:34.033 [2024-11-08 07:32:51.803744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.033 [2024-11-08 07:32:51.803799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.033 [2024-11-08 07:32:51.803830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56706 for offline analysis/debug. 
00:05:34.033 [2024-11-08 07:32:51.804225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.033 [2024-11-08 07:32:51.862570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.602 07:32:52 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.602 07:32:52 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:34.602 07:32:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.602 07:32:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.602 07:32:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:34.602 07:32:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:34.602 07:32:52 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.602 07:32:52 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.602 07:32:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.602 ************************************ 00:05:34.602 START TEST rpc_integrity 00:05:34.602 ************************************ 00:05:34.602 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:34.602 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.602 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.602 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.602 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.602 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.602 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.873 { 00:05:34.873 "name": "Malloc0", 00:05:34.873 "aliases": [ 00:05:34.873 "aeb77ac3-9772-4412-b917-66d175d3d893" 00:05:34.873 ], 00:05:34.873 "product_name": "Malloc disk", 00:05:34.873 "block_size": 512, 00:05:34.873 "num_blocks": 16384, 00:05:34.873 "uuid": "aeb77ac3-9772-4412-b917-66d175d3d893", 00:05:34.873 "assigned_rate_limits": { 00:05:34.873 "rw_ios_per_sec": 0, 00:05:34.873 "rw_mbytes_per_sec": 0, 00:05:34.873 "r_mbytes_per_sec": 0, 00:05:34.873 "w_mbytes_per_sec": 0 00:05:34.873 }, 00:05:34.873 "claimed": false, 00:05:34.873 "zoned": false, 00:05:34.873 
"supported_io_types": { 00:05:34.873 "read": true, 00:05:34.873 "write": true, 00:05:34.873 "unmap": true, 00:05:34.873 "flush": true, 00:05:34.873 "reset": true, 00:05:34.873 "nvme_admin": false, 00:05:34.873 "nvme_io": false, 00:05:34.873 "nvme_io_md": false, 00:05:34.873 "write_zeroes": true, 00:05:34.873 "zcopy": true, 00:05:34.873 "get_zone_info": false, 00:05:34.873 "zone_management": false, 00:05:34.873 "zone_append": false, 00:05:34.873 "compare": false, 00:05:34.873 "compare_and_write": false, 00:05:34.873 "abort": true, 00:05:34.873 "seek_hole": false, 00:05:34.873 "seek_data": false, 00:05:34.873 "copy": true, 00:05:34.873 "nvme_iov_md": false 00:05:34.873 }, 00:05:34.873 "memory_domains": [ 00:05:34.873 { 00:05:34.873 "dma_device_id": "system", 00:05:34.873 "dma_device_type": 1 00:05:34.873 }, 00:05:34.873 { 00:05:34.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.873 "dma_device_type": 2 00:05:34.873 } 00:05:34.873 ], 00:05:34.873 "driver_specific": {} 00:05:34.873 } 00:05:34.873 ]' 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.873 [2024-11-08 07:32:52.686171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:34.873 [2024-11-08 07:32:52.686224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.873 [2024-11-08 07:32:52.686245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1631f10 00:05:34.873 [2024-11-08 07:32:52.686256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.873 [2024-11-08 07:32:52.687718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.873 [2024-11-08 07:32:52.687760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.873 Passthru0 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.873 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.873 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.873 { 00:05:34.873 "name": "Malloc0", 00:05:34.873 "aliases": [ 00:05:34.873 "aeb77ac3-9772-4412-b917-66d175d3d893" 00:05:34.873 ], 00:05:34.873 "product_name": "Malloc disk", 00:05:34.873 "block_size": 512, 00:05:34.873 "num_blocks": 16384, 00:05:34.873 "uuid": "aeb77ac3-9772-4412-b917-66d175d3d893", 00:05:34.873 "assigned_rate_limits": { 00:05:34.873 "rw_ios_per_sec": 0, 00:05:34.873 "rw_mbytes_per_sec": 0, 00:05:34.873 "r_mbytes_per_sec": 0, 00:05:34.873 "w_mbytes_per_sec": 0 00:05:34.873 }, 00:05:34.873 "claimed": true, 00:05:34.873 "claim_type": "exclusive_write", 00:05:34.873 "zoned": false, 00:05:34.873 "supported_io_types": { 00:05:34.873 "read": true, 00:05:34.873 "write": true, 00:05:34.873 "unmap": true, 00:05:34.873 "flush": true, 00:05:34.873 "reset": true, 00:05:34.873 "nvme_admin": false, 
00:05:34.873 "nvme_io": false, 00:05:34.873 "nvme_io_md": false, 00:05:34.873 "write_zeroes": true, 00:05:34.873 "zcopy": true, 00:05:34.873 "get_zone_info": false, 00:05:34.873 "zone_management": false, 00:05:34.873 "zone_append": false, 00:05:34.873 "compare": false, 00:05:34.873 "compare_and_write": false, 00:05:34.873 "abort": true, 00:05:34.873 "seek_hole": false, 00:05:34.873 "seek_data": false, 00:05:34.873 "copy": true, 00:05:34.873 "nvme_iov_md": false 00:05:34.873 }, 00:05:34.873 "memory_domains": [ 00:05:34.873 { 00:05:34.873 "dma_device_id": "system", 00:05:34.873 "dma_device_type": 1 00:05:34.873 }, 00:05:34.873 { 00:05:34.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.873 "dma_device_type": 2 00:05:34.873 } 00:05:34.873 ], 00:05:34.873 "driver_specific": {} 00:05:34.873 }, 00:05:34.873 { 00:05:34.873 "name": "Passthru0", 00:05:34.873 "aliases": [ 00:05:34.873 "598eea1d-9f27-5e66-b6ab-29a5b017239a" 00:05:34.873 ], 00:05:34.873 "product_name": "passthru", 00:05:34.873 "block_size": 512, 00:05:34.873 "num_blocks": 16384, 00:05:34.873 "uuid": "598eea1d-9f27-5e66-b6ab-29a5b017239a", 00:05:34.873 "assigned_rate_limits": { 00:05:34.873 "rw_ios_per_sec": 0, 00:05:34.873 "rw_mbytes_per_sec": 0, 00:05:34.873 "r_mbytes_per_sec": 0, 00:05:34.873 "w_mbytes_per_sec": 0 00:05:34.873 }, 00:05:34.873 "claimed": false, 00:05:34.873 "zoned": false, 00:05:34.873 "supported_io_types": { 00:05:34.873 "read": true, 00:05:34.873 "write": true, 00:05:34.873 "unmap": true, 00:05:34.873 "flush": true, 00:05:34.873 "reset": true, 00:05:34.873 "nvme_admin": false, 00:05:34.873 "nvme_io": false, 00:05:34.873 "nvme_io_md": false, 00:05:34.873 "write_zeroes": true, 00:05:34.873 "zcopy": true, 00:05:34.873 "get_zone_info": false, 00:05:34.873 "zone_management": false, 00:05:34.873 "zone_append": false, 00:05:34.873 "compare": false, 00:05:34.873 "compare_and_write": false, 00:05:34.873 "abort": true, 00:05:34.873 "seek_hole": false, 00:05:34.873 "seek_data": false, 00:05:34.873 "copy": true, 00:05:34.874 "nvme_iov_md": false 00:05:34.874 }, 00:05:34.874 "memory_domains": [ 00:05:34.874 { 00:05:34.874 "dma_device_id": "system", 00:05:34.874 "dma_device_type": 1 00:05:34.874 }, 00:05:34.874 { 00:05:34.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.874 "dma_device_type": 2 00:05:34.874 } 00:05:34.874 ], 00:05:34.874 "driver_specific": { 00:05:34.874 "passthru": { 00:05:34.874 "name": "Passthru0", 00:05:34.874 "base_bdev_name": "Malloc0" 00:05:34.874 } 00:05:34.874 } 00:05:34.874 } 00:05:34.874 ]' 00:05:34.874 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.874 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.874 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.874 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.874 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.874 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.874 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:34.874 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.874 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.874 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.874 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.874 07:32:52 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.874 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.874 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.874 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.874 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.152 ************************************ 00:05:35.152 END TEST rpc_integrity 00:05:35.152 ************************************ 00:05:35.152 07:32:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.152 00:05:35.152 real 0m0.292s 00:05:35.152 user 0m0.180s 00:05:35.152 sys 0m0.051s 00:05:35.152 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.152 07:32:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 07:32:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:35.152 07:32:52 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.152 07:32:52 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.152 07:32:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 ************************************ 00:05:35.152 START TEST rpc_plugins 00:05:35.152 ************************************ 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:35.152 07:32:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.152 07:32:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:35.152 07:32:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.152 07:32:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:35.152 { 00:05:35.152 "name": "Malloc1", 00:05:35.152 "aliases": [ 00:05:35.152 "7f3756e6-51be-46a5-a190-777ae24b29bc" 00:05:35.152 ], 00:05:35.152 "product_name": "Malloc disk", 00:05:35.152 "block_size": 4096, 00:05:35.152 "num_blocks": 256, 00:05:35.152 "uuid": "7f3756e6-51be-46a5-a190-777ae24b29bc", 00:05:35.152 "assigned_rate_limits": { 00:05:35.152 "rw_ios_per_sec": 0, 00:05:35.152 "rw_mbytes_per_sec": 0, 00:05:35.152 "r_mbytes_per_sec": 0, 00:05:35.152 "w_mbytes_per_sec": 0 00:05:35.152 }, 00:05:35.152 "claimed": false, 00:05:35.152 "zoned": false, 00:05:35.152 "supported_io_types": { 00:05:35.152 "read": true, 00:05:35.152 "write": true, 00:05:35.152 "unmap": true, 00:05:35.152 "flush": true, 00:05:35.152 "reset": true, 00:05:35.152 "nvme_admin": false, 00:05:35.152 "nvme_io": false, 00:05:35.152 "nvme_io_md": false, 00:05:35.152 "write_zeroes": true, 00:05:35.152 "zcopy": true, 00:05:35.152 "get_zone_info": false, 00:05:35.152 "zone_management": false, 00:05:35.152 "zone_append": false, 00:05:35.152 "compare": false, 00:05:35.152 "compare_and_write": false, 00:05:35.152 "abort": true, 00:05:35.152 "seek_hole": false, 00:05:35.152 "seek_data": false, 00:05:35.152 "copy": true, 00:05:35.152 "nvme_iov_md": false 00:05:35.152 }, 00:05:35.152 "memory_domains": [ 00:05:35.152 { 
00:05:35.152 "dma_device_id": "system", 00:05:35.152 "dma_device_type": 1 00:05:35.152 }, 00:05:35.152 { 00:05:35.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.152 "dma_device_type": 2 00:05:35.152 } 00:05:35.152 ], 00:05:35.152 "driver_specific": {} 00:05:35.152 } 00:05:35.152 ]' 00:05:35.152 07:32:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:35.152 07:32:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:35.152 07:32:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.152 07:32:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.152 07:32:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.152 07:32:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.152 07:32:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:35.152 07:32:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:35.152 ************************************ 00:05:35.152 END TEST rpc_plugins 00:05:35.152 ************************************ 00:05:35.152 07:32:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:35.152 00:05:35.152 real 0m0.159s 00:05:35.152 user 0m0.100s 00:05:35.152 sys 0m0.019s 00:05:35.152 07:32:53 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.152 07:32:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.153 07:32:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:35.153 07:32:53 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.153 07:32:53 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.153 07:32:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.411 ************************************ 00:05:35.411 START TEST rpc_trace_cmd_test 00:05:35.411 ************************************ 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:35.411 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56706", 00:05:35.411 "tpoint_group_mask": "0x8", 00:05:35.411 "iscsi_conn": { 00:05:35.411 "mask": "0x2", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "scsi": { 00:05:35.411 "mask": "0x4", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "bdev": { 00:05:35.411 "mask": "0x8", 00:05:35.411 "tpoint_mask": "0xffffffffffffffff" 00:05:35.411 }, 00:05:35.411 "nvmf_rdma": { 00:05:35.411 "mask": "0x10", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "nvmf_tcp": { 00:05:35.411 "mask": "0x20", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "ftl": { 00:05:35.411 
"mask": "0x40", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "blobfs": { 00:05:35.411 "mask": "0x80", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "dsa": { 00:05:35.411 "mask": "0x200", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "thread": { 00:05:35.411 "mask": "0x400", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "nvme_pcie": { 00:05:35.411 "mask": "0x800", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "iaa": { 00:05:35.411 "mask": "0x1000", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "nvme_tcp": { 00:05:35.411 "mask": "0x2000", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "bdev_nvme": { 00:05:35.411 "mask": "0x4000", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "sock": { 00:05:35.411 "mask": "0x8000", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "blob": { 00:05:35.411 "mask": "0x10000", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "bdev_raid": { 00:05:35.411 "mask": "0x20000", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 }, 00:05:35.411 "scheduler": { 00:05:35.411 "mask": "0x40000", 00:05:35.411 "tpoint_mask": "0x0" 00:05:35.411 } 00:05:35.411 }' 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.411 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.412 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:35.412 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:35.412 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:35.412 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:35.412 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:35.670 ************************************ 00:05:35.670 END TEST rpc_trace_cmd_test 00:05:35.670 ************************************ 00:05:35.670 07:32:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:35.670 00:05:35.670 real 0m0.259s 00:05:35.670 user 0m0.210s 00:05:35.670 sys 0m0.041s 00:05:35.670 07:32:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.670 07:32:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.670 07:32:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:35.670 07:32:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:35.670 07:32:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:35.670 07:32:53 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.670 07:32:53 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.670 07:32:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.670 ************************************ 00:05:35.670 START TEST rpc_daemon_integrity 00:05:35.670 ************************************ 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.670 
07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.670 { 00:05:35.670 "name": "Malloc2", 00:05:35.670 "aliases": [ 00:05:35.670 "08106353-b713-438d-8389-348eb11a500f" 00:05:35.670 ], 00:05:35.670 "product_name": "Malloc disk", 00:05:35.670 "block_size": 512, 00:05:35.670 "num_blocks": 16384, 00:05:35.670 "uuid": "08106353-b713-438d-8389-348eb11a500f", 00:05:35.670 "assigned_rate_limits": { 00:05:35.670 "rw_ios_per_sec": 0, 00:05:35.670 "rw_mbytes_per_sec": 0, 00:05:35.670 "r_mbytes_per_sec": 0, 00:05:35.670 "w_mbytes_per_sec": 0 00:05:35.670 }, 00:05:35.670 "claimed": false, 00:05:35.670 "zoned": false, 00:05:35.670 "supported_io_types": { 00:05:35.670 "read": true, 00:05:35.670 "write": true, 00:05:35.670 "unmap": true, 00:05:35.670 "flush": true, 00:05:35.670 "reset": true, 00:05:35.670 "nvme_admin": false, 00:05:35.670 "nvme_io": false, 00:05:35.670 "nvme_io_md": false, 00:05:35.670 "write_zeroes": true, 00:05:35.670 "zcopy": true, 00:05:35.670 "get_zone_info": false, 00:05:35.670 "zone_management": false, 00:05:35.670 "zone_append": false, 00:05:35.670 "compare": false, 00:05:35.670 "compare_and_write": false, 00:05:35.670 "abort": true, 00:05:35.670 "seek_hole": false, 00:05:35.670 "seek_data": false, 00:05:35.670 "copy": true, 00:05:35.670 "nvme_iov_md": false 00:05:35.670 }, 00:05:35.670 "memory_domains": [ 00:05:35.670 { 00:05:35.670 "dma_device_id": "system", 00:05:35.670 "dma_device_type": 1 00:05:35.670 }, 00:05:35.670 { 00:05:35.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.670 "dma_device_type": 2 00:05:35.670 } 00:05:35.670 ], 00:05:35.670 "driver_specific": {} 00:05:35.670 } 00:05:35.670 ]' 00:05:35.670 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.671 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.671 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:35.671 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.671 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.671 [2024-11-08 07:32:53.610425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:35.671 [2024-11-08 07:32:53.610481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:35.671 [2024-11-08 07:32:53.610500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17cc980 00:05:35.671 [2024-11-08 07:32:53.610510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.671 [2024-11-08 07:32:53.611983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.671 [2024-11-08 07:32:53.612174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.671 Passthru0 00:05:35.671 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.671 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.671 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.671 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.929 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.929 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.929 { 00:05:35.929 "name": "Malloc2", 00:05:35.929 "aliases": [ 00:05:35.929 "08106353-b713-438d-8389-348eb11a500f" 00:05:35.929 ], 00:05:35.929 "product_name": "Malloc disk", 00:05:35.929 "block_size": 512, 00:05:35.929 "num_blocks": 16384, 00:05:35.929 "uuid": "08106353-b713-438d-8389-348eb11a500f", 00:05:35.929 "assigned_rate_limits": { 00:05:35.929 "rw_ios_per_sec": 0, 00:05:35.929 "rw_mbytes_per_sec": 0, 00:05:35.929 "r_mbytes_per_sec": 0, 00:05:35.929 "w_mbytes_per_sec": 0 00:05:35.929 }, 00:05:35.929 "claimed": true, 00:05:35.929 "claim_type": "exclusive_write", 00:05:35.929 "zoned": false, 00:05:35.929 "supported_io_types": { 00:05:35.929 "read": true, 00:05:35.929 "write": true, 00:05:35.929 "unmap": true, 00:05:35.929 "flush": true, 00:05:35.929 "reset": true, 00:05:35.929 "nvme_admin": false, 00:05:35.929 "nvme_io": false, 00:05:35.929 "nvme_io_md": false, 00:05:35.929 "write_zeroes": true, 00:05:35.929 "zcopy": true, 00:05:35.929 "get_zone_info": false, 00:05:35.929 "zone_management": false, 00:05:35.929 "zone_append": false, 00:05:35.929 "compare": false, 00:05:35.929 "compare_and_write": false, 00:05:35.929 "abort": true, 00:05:35.929 "seek_hole": false, 00:05:35.929 "seek_data": false, 00:05:35.929 "copy": true, 00:05:35.929 "nvme_iov_md": false 00:05:35.929 }, 00:05:35.929 "memory_domains": [ 00:05:35.929 { 00:05:35.929 "dma_device_id": "system", 00:05:35.929 "dma_device_type": 1 00:05:35.929 }, 00:05:35.929 { 00:05:35.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.929 "dma_device_type": 2 00:05:35.929 } 00:05:35.929 ], 00:05:35.929 "driver_specific": {} 00:05:35.929 }, 00:05:35.929 { 00:05:35.929 "name": "Passthru0", 00:05:35.929 "aliases": [ 00:05:35.929 "129a7532-3d22-53d2-ab80-423ccc934105" 00:05:35.929 ], 00:05:35.929 "product_name": "passthru", 00:05:35.929 "block_size": 512, 00:05:35.929 "num_blocks": 16384, 00:05:35.929 "uuid": "129a7532-3d22-53d2-ab80-423ccc934105", 00:05:35.929 "assigned_rate_limits": { 00:05:35.929 "rw_ios_per_sec": 0, 00:05:35.929 "rw_mbytes_per_sec": 0, 00:05:35.929 "r_mbytes_per_sec": 0, 00:05:35.929 "w_mbytes_per_sec": 0 00:05:35.929 }, 00:05:35.929 "claimed": false, 00:05:35.929 "zoned": false, 00:05:35.929 "supported_io_types": { 00:05:35.929 "read": true, 00:05:35.929 "write": true, 00:05:35.929 "unmap": true, 00:05:35.929 "flush": true, 00:05:35.929 "reset": true, 00:05:35.929 "nvme_admin": false, 00:05:35.929 "nvme_io": false, 00:05:35.929 
"nvme_io_md": false, 00:05:35.929 "write_zeroes": true, 00:05:35.929 "zcopy": true, 00:05:35.929 "get_zone_info": false, 00:05:35.929 "zone_management": false, 00:05:35.929 "zone_append": false, 00:05:35.929 "compare": false, 00:05:35.929 "compare_and_write": false, 00:05:35.929 "abort": true, 00:05:35.929 "seek_hole": false, 00:05:35.929 "seek_data": false, 00:05:35.929 "copy": true, 00:05:35.929 "nvme_iov_md": false 00:05:35.929 }, 00:05:35.929 "memory_domains": [ 00:05:35.929 { 00:05:35.929 "dma_device_id": "system", 00:05:35.929 "dma_device_type": 1 00:05:35.929 }, 00:05:35.929 { 00:05:35.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.929 "dma_device_type": 2 00:05:35.929 } 00:05:35.929 ], 00:05:35.929 "driver_specific": { 00:05:35.929 "passthru": { 00:05:35.929 "name": "Passthru0", 00:05:35.929 "base_bdev_name": "Malloc2" 00:05:35.929 } 00:05:35.929 } 00:05:35.929 } 00:05:35.929 ]' 00:05:35.929 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.929 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.929 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.929 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.930 ************************************ 00:05:35.930 END TEST rpc_daemon_integrity 00:05:35.930 ************************************ 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.930 00:05:35.930 real 0m0.346s 00:05:35.930 user 0m0.227s 00:05:35.930 sys 0m0.054s 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.930 07:32:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.930 07:32:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:35.930 07:32:53 rpc -- rpc/rpc.sh@84 -- # killprocess 56706 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@952 -- # '[' -z 56706 ']' 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@956 -- # kill -0 56706 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@957 -- # uname 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56706 00:05:35.930 killing process with pid 56706 00:05:35.930 07:32:53 rpc -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56706' 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@971 -- # kill 56706 00:05:35.930 07:32:53 rpc -- common/autotest_common.sh@976 -- # wait 56706 00:05:36.498 00:05:36.498 real 0m2.830s 00:05:36.498 user 0m3.587s 00:05:36.498 sys 0m0.753s 00:05:36.498 07:32:54 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.498 ************************************ 00:05:36.498 END TEST rpc 00:05:36.498 ************************************ 00:05:36.498 07:32:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.498 07:32:54 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:36.498 07:32:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.498 07:32:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.498 07:32:54 -- common/autotest_common.sh@10 -- # set +x 00:05:36.498 ************************************ 00:05:36.498 START TEST skip_rpc 00:05:36.498 ************************************ 00:05:36.498 07:32:54 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:36.498 * Looking for test storage... 00:05:36.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.498 07:32:54 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:36.498 07:32:54 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.498 07:32:54 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:36.498 07:32:54 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.498 07:32:54 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:36.498 07:32:54 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.498 07:32:54 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.498 --rc genhtml_branch_coverage=1 00:05:36.498 --rc genhtml_function_coverage=1 00:05:36.498 --rc genhtml_legend=1 00:05:36.498 --rc geninfo_all_blocks=1 00:05:36.499 --rc geninfo_unexecuted_blocks=1 00:05:36.499 00:05:36.499 ' 00:05:36.499 07:32:54 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.499 --rc genhtml_branch_coverage=1 00:05:36.499 --rc genhtml_function_coverage=1 00:05:36.499 --rc genhtml_legend=1 00:05:36.499 --rc geninfo_all_blocks=1 00:05:36.499 --rc geninfo_unexecuted_blocks=1 00:05:36.499 00:05:36.499 ' 00:05:36.499 07:32:54 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.499 --rc genhtml_branch_coverage=1 00:05:36.499 --rc genhtml_function_coverage=1 00:05:36.499 --rc genhtml_legend=1 00:05:36.499 --rc geninfo_all_blocks=1 00:05:36.499 --rc geninfo_unexecuted_blocks=1 00:05:36.499 00:05:36.499 ' 00:05:36.499 07:32:54 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.499 --rc genhtml_branch_coverage=1 00:05:36.499 --rc genhtml_function_coverage=1 00:05:36.499 --rc genhtml_legend=1 00:05:36.499 --rc geninfo_all_blocks=1 00:05:36.499 --rc geninfo_unexecuted_blocks=1 00:05:36.499 00:05:36.499 ' 00:05:36.499 07:32:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:36.499 07:32:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:36.499 07:32:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:36.499 07:32:54 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.499 07:32:54 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.499 07:32:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.499 ************************************ 00:05:36.499 START TEST skip_rpc 00:05:36.499 ************************************ 00:05:36.499 07:32:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:36.757 07:32:54 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56912 00:05:36.757 07:32:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.757 07:32:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:36.757 07:32:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:36.757 [2024-11-08 07:32:54.526297] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:05:36.757 [2024-11-08 07:32:54.526584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56912 ] 00:05:36.757 [2024-11-08 07:32:54.676071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.016 [2024-11-08 07:32:54.722042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.016 [2024-11-08 07:32:54.780603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56912 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56912 ']' 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56912 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:42.287 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56912 00:05:42.287 killing process with pid 56912 00:05:42.288 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:42.288 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:42.288 07:32:59 skip_rpc.skip_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 56912' 00:05:42.288 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56912 00:05:42.288 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56912 00:05:42.288 00:05:42.288 real 0m5.391s 00:05:42.288 user 0m5.039s 00:05:42.288 sys 0m0.268s 00:05:42.288 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.288 07:32:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.288 ************************************ 00:05:42.288 END TEST skip_rpc 00:05:42.288 ************************************ 00:05:42.288 07:32:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:42.288 07:32:59 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.288 07:32:59 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.288 07:32:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.288 ************************************ 00:05:42.288 START TEST skip_rpc_with_json 00:05:42.288 ************************************ 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:42.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56993 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56993 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56993 ']' 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:42.288 07:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.288 [2024-11-08 07:32:59.965175] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:05:42.288 [2024-11-08 07:32:59.965434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56993 ] 00:05:42.288 [2024-11-08 07:33:00.106824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.288 [2024-11-08 07:33:00.166652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.288 [2024-11-08 07:33:00.228883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.545 [2024-11-08 07:33:00.404545] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:42.545 request: 00:05:42.545 { 00:05:42.545 "trtype": "tcp", 00:05:42.545 "method": "nvmf_get_transports", 00:05:42.545 "req_id": 1 00:05:42.545 } 00:05:42.545 Got JSON-RPC error response 00:05:42.545 response: 00:05:42.545 { 00:05:42.545 "code": -19, 00:05:42.545 "message": "No such device" 00:05:42.545 } 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.545 [2024-11-08 07:33:00.416654] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.545 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:42.546 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.546 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.803 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.803 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.803 { 00:05:42.803 "subsystems": [ 00:05:42.803 { 00:05:42.803 "subsystem": "fsdev", 00:05:42.803 "config": [ 00:05:42.803 { 00:05:42.803 "method": "fsdev_set_opts", 00:05:42.803 "params": { 00:05:42.803 "fsdev_io_pool_size": 65535, 00:05:42.803 "fsdev_io_cache_size": 256 00:05:42.803 } 00:05:42.803 } 00:05:42.803 ] 00:05:42.803 }, 00:05:42.803 { 00:05:42.803 "subsystem": "keyring", 00:05:42.803 "config": [] 00:05:42.803 }, 00:05:42.803 { 00:05:42.803 "subsystem": "iobuf", 00:05:42.803 "config": [ 00:05:42.803 { 00:05:42.803 "method": "iobuf_set_options", 00:05:42.803 "params": { 00:05:42.803 "small_pool_count": 8192, 00:05:42.803 "large_pool_count": 1024, 00:05:42.803 "small_bufsize": 8192, 00:05:42.803 "large_bufsize": 135168, 00:05:42.803 "enable_numa": false 00:05:42.803 } 
00:05:42.803 } 00:05:42.803 ] 00:05:42.803 }, 00:05:42.803 { 00:05:42.803 "subsystem": "sock", 00:05:42.803 "config": [ 00:05:42.803 { 00:05:42.803 "method": "sock_set_default_impl", 00:05:42.803 "params": { 00:05:42.803 "impl_name": "uring" 00:05:42.803 } 00:05:42.803 }, 00:05:42.803 { 00:05:42.803 "method": "sock_impl_set_options", 00:05:42.803 "params": { 00:05:42.803 "impl_name": "ssl", 00:05:42.803 "recv_buf_size": 4096, 00:05:42.803 "send_buf_size": 4096, 00:05:42.803 "enable_recv_pipe": true, 00:05:42.803 "enable_quickack": false, 00:05:42.803 "enable_placement_id": 0, 00:05:42.803 "enable_zerocopy_send_server": true, 00:05:42.803 "enable_zerocopy_send_client": false, 00:05:42.803 "zerocopy_threshold": 0, 00:05:42.803 "tls_version": 0, 00:05:42.803 "enable_ktls": false 00:05:42.803 } 00:05:42.803 }, 00:05:42.803 { 00:05:42.803 "method": "sock_impl_set_options", 00:05:42.803 "params": { 00:05:42.803 "impl_name": "posix", 00:05:42.803 "recv_buf_size": 2097152, 00:05:42.803 "send_buf_size": 2097152, 00:05:42.803 "enable_recv_pipe": true, 00:05:42.803 "enable_quickack": false, 00:05:42.803 "enable_placement_id": 0, 00:05:42.803 "enable_zerocopy_send_server": true, 00:05:42.803 "enable_zerocopy_send_client": false, 00:05:42.803 "zerocopy_threshold": 0, 00:05:42.803 "tls_version": 0, 00:05:42.804 "enable_ktls": false 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "sock_impl_set_options", 00:05:42.804 "params": { 00:05:42.804 "impl_name": "uring", 00:05:42.804 "recv_buf_size": 2097152, 00:05:42.804 "send_buf_size": 2097152, 00:05:42.804 "enable_recv_pipe": true, 00:05:42.804 "enable_quickack": false, 00:05:42.804 "enable_placement_id": 0, 00:05:42.804 "enable_zerocopy_send_server": false, 00:05:42.804 "enable_zerocopy_send_client": false, 00:05:42.804 "zerocopy_threshold": 0, 00:05:42.804 "tls_version": 0, 00:05:42.804 "enable_ktls": false 00:05:42.804 } 00:05:42.804 } 00:05:42.804 ] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "vmd", 00:05:42.804 "config": [] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "accel", 00:05:42.804 "config": [ 00:05:42.804 { 00:05:42.804 "method": "accel_set_options", 00:05:42.804 "params": { 00:05:42.804 "small_cache_size": 128, 00:05:42.804 "large_cache_size": 16, 00:05:42.804 "task_count": 2048, 00:05:42.804 "sequence_count": 2048, 00:05:42.804 "buf_count": 2048 00:05:42.804 } 00:05:42.804 } 00:05:42.804 ] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "bdev", 00:05:42.804 "config": [ 00:05:42.804 { 00:05:42.804 "method": "bdev_set_options", 00:05:42.804 "params": { 00:05:42.804 "bdev_io_pool_size": 65535, 00:05:42.804 "bdev_io_cache_size": 256, 00:05:42.804 "bdev_auto_examine": true, 00:05:42.804 "iobuf_small_cache_size": 128, 00:05:42.804 "iobuf_large_cache_size": 16 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "bdev_raid_set_options", 00:05:42.804 "params": { 00:05:42.804 "process_window_size_kb": 1024, 00:05:42.804 "process_max_bandwidth_mb_sec": 0 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "bdev_iscsi_set_options", 00:05:42.804 "params": { 00:05:42.804 "timeout_sec": 30 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "bdev_nvme_set_options", 00:05:42.804 "params": { 00:05:42.804 "action_on_timeout": "none", 00:05:42.804 "timeout_us": 0, 00:05:42.804 "timeout_admin_us": 0, 00:05:42.804 "keep_alive_timeout_ms": 10000, 00:05:42.804 "arbitration_burst": 0, 00:05:42.804 "low_priority_weight": 0, 00:05:42.804 "medium_priority_weight": 
0, 00:05:42.804 "high_priority_weight": 0, 00:05:42.804 "nvme_adminq_poll_period_us": 10000, 00:05:42.804 "nvme_ioq_poll_period_us": 0, 00:05:42.804 "io_queue_requests": 0, 00:05:42.804 "delay_cmd_submit": true, 00:05:42.804 "transport_retry_count": 4, 00:05:42.804 "bdev_retry_count": 3, 00:05:42.804 "transport_ack_timeout": 0, 00:05:42.804 "ctrlr_loss_timeout_sec": 0, 00:05:42.804 "reconnect_delay_sec": 0, 00:05:42.804 "fast_io_fail_timeout_sec": 0, 00:05:42.804 "disable_auto_failback": false, 00:05:42.804 "generate_uuids": false, 00:05:42.804 "transport_tos": 0, 00:05:42.804 "nvme_error_stat": false, 00:05:42.804 "rdma_srq_size": 0, 00:05:42.804 "io_path_stat": false, 00:05:42.804 "allow_accel_sequence": false, 00:05:42.804 "rdma_max_cq_size": 0, 00:05:42.804 "rdma_cm_event_timeout_ms": 0, 00:05:42.804 "dhchap_digests": [ 00:05:42.804 "sha256", 00:05:42.804 "sha384", 00:05:42.804 "sha512" 00:05:42.804 ], 00:05:42.804 "dhchap_dhgroups": [ 00:05:42.804 "null", 00:05:42.804 "ffdhe2048", 00:05:42.804 "ffdhe3072", 00:05:42.804 "ffdhe4096", 00:05:42.804 "ffdhe6144", 00:05:42.804 "ffdhe8192" 00:05:42.804 ] 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "bdev_nvme_set_hotplug", 00:05:42.804 "params": { 00:05:42.804 "period_us": 100000, 00:05:42.804 "enable": false 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "bdev_wait_for_examine" 00:05:42.804 } 00:05:42.804 ] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "scsi", 00:05:42.804 "config": null 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "scheduler", 00:05:42.804 "config": [ 00:05:42.804 { 00:05:42.804 "method": "framework_set_scheduler", 00:05:42.804 "params": { 00:05:42.804 "name": "static" 00:05:42.804 } 00:05:42.804 } 00:05:42.804 ] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "vhost_scsi", 00:05:42.804 "config": [] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "vhost_blk", 00:05:42.804 "config": [] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "ublk", 00:05:42.804 "config": [] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "nbd", 00:05:42.804 "config": [] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "nvmf", 00:05:42.804 "config": [ 00:05:42.804 { 00:05:42.804 "method": "nvmf_set_config", 00:05:42.804 "params": { 00:05:42.804 "discovery_filter": "match_any", 00:05:42.804 "admin_cmd_passthru": { 00:05:42.804 "identify_ctrlr": false 00:05:42.804 }, 00:05:42.804 "dhchap_digests": [ 00:05:42.804 "sha256", 00:05:42.804 "sha384", 00:05:42.804 "sha512" 00:05:42.804 ], 00:05:42.804 "dhchap_dhgroups": [ 00:05:42.804 "null", 00:05:42.804 "ffdhe2048", 00:05:42.804 "ffdhe3072", 00:05:42.804 "ffdhe4096", 00:05:42.804 "ffdhe6144", 00:05:42.804 "ffdhe8192" 00:05:42.804 ] 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "nvmf_set_max_subsystems", 00:05:42.804 "params": { 00:05:42.804 "max_subsystems": 1024 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "nvmf_set_crdt", 00:05:42.804 "params": { 00:05:42.804 "crdt1": 0, 00:05:42.804 "crdt2": 0, 00:05:42.804 "crdt3": 0 00:05:42.804 } 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "method": "nvmf_create_transport", 00:05:42.804 "params": { 00:05:42.804 "trtype": "TCP", 00:05:42.804 "max_queue_depth": 128, 00:05:42.804 "max_io_qpairs_per_ctrlr": 127, 00:05:42.804 "in_capsule_data_size": 4096, 00:05:42.804 "max_io_size": 131072, 00:05:42.804 "io_unit_size": 131072, 00:05:42.804 "max_aq_depth": 128, 00:05:42.804 "num_shared_buffers": 511, 00:05:42.804 
"buf_cache_size": 4294967295, 00:05:42.804 "dif_insert_or_strip": false, 00:05:42.804 "zcopy": false, 00:05:42.804 "c2h_success": true, 00:05:42.804 "sock_priority": 0, 00:05:42.804 "abort_timeout_sec": 1, 00:05:42.804 "ack_timeout": 0, 00:05:42.804 "data_wr_pool_size": 0 00:05:42.804 } 00:05:42.804 } 00:05:42.804 ] 00:05:42.804 }, 00:05:42.804 { 00:05:42.804 "subsystem": "iscsi", 00:05:42.804 "config": [ 00:05:42.804 { 00:05:42.804 "method": "iscsi_set_options", 00:05:42.804 "params": { 00:05:42.804 "node_base": "iqn.2016-06.io.spdk", 00:05:42.804 "max_sessions": 128, 00:05:42.804 "max_connections_per_session": 2, 00:05:42.804 "max_queue_depth": 64, 00:05:42.804 "default_time2wait": 2, 00:05:42.804 "default_time2retain": 20, 00:05:42.804 "first_burst_length": 8192, 00:05:42.804 "immediate_data": true, 00:05:42.804 "allow_duplicated_isid": false, 00:05:42.804 "error_recovery_level": 0, 00:05:42.804 "nop_timeout": 60, 00:05:42.804 "nop_in_interval": 30, 00:05:42.804 "disable_chap": false, 00:05:42.804 "require_chap": false, 00:05:42.804 "mutual_chap": false, 00:05:42.804 "chap_group": 0, 00:05:42.804 "max_large_datain_per_connection": 64, 00:05:42.804 "max_r2t_per_connection": 4, 00:05:42.804 "pdu_pool_size": 36864, 00:05:42.804 "immediate_data_pool_size": 16384, 00:05:42.804 "data_out_pool_size": 2048 00:05:42.804 } 00:05:42.804 } 00:05:42.804 ] 00:05:42.804 } 00:05:42.804 ] 00:05:42.804 } 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56993 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56993 ']' 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56993 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56993 00:05:42.804 killing process with pid 56993 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56993' 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56993 00:05:42.804 07:33:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56993 00:05:43.062 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57013 00:05:43.062 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.062 07:33:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:48.330 07:33:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57013 00:05:48.330 07:33:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57013 ']' 00:05:48.330 07:33:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57013 00:05:48.330 07:33:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:48.330 07:33:05 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.330 07:33:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57013 00:05:48.330 killing process with pid 57013 00:05:48.330 07:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.330 07:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.330 07:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57013' 00:05:48.330 07:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57013 00:05:48.330 07:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57013 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.589 00:05:48.589 real 0m6.433s 00:05:48.589 user 0m6.037s 00:05:48.589 sys 0m0.605s 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.589 ************************************ 00:05:48.589 END TEST skip_rpc_with_json 00:05:48.589 ************************************ 00:05:48.589 07:33:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.589 07:33:06 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.589 07:33:06 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.589 07:33:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.589 ************************************ 00:05:48.589 START TEST skip_rpc_with_delay 00:05:48.589 ************************************ 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.589 07:33:06 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.589 [2024-11-08 07:33:06.476514] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.589 00:05:48.589 real 0m0.095s 00:05:48.589 user 0m0.052s 00:05:48.589 sys 0m0.041s 00:05:48.589 ************************************ 00:05:48.589 END TEST skip_rpc_with_delay 00:05:48.589 ************************************ 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.589 07:33:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:48.848 07:33:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:48.848 07:33:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:48.848 07:33:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:48.848 07:33:06 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.848 07:33:06 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.848 07:33:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.848 ************************************ 00:05:48.848 START TEST exit_on_failed_rpc_init 00:05:48.848 ************************************ 00:05:48.848 07:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:48.848 07:33:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57123 00:05:48.848 07:33:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57123 00:05:48.848 07:33:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.848 07:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57123 ']' 00:05:48.848 07:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.849 07:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.849 07:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.849 07:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.849 07:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.849 [2024-11-08 07:33:06.639793] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:05:48.849 [2024-11-08 07:33:06.640153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57123 ] 00:05:48.849 [2024-11-08 07:33:06.785327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.107 [2024-11-08 07:33:06.839007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.107 [2024-11-08 07:33:06.898757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:49.675 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.675 [2024-11-08 07:33:07.628506] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:05:49.675 [2024-11-08 07:33:07.628612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57141 ] 00:05:49.933 [2024-11-08 07:33:07.787129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.933 [2024-11-08 07:33:07.850371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.933 [2024-11-08 07:33:07.850707] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
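The two RPC errors above are the expected outcome of exit_on_failed_rpc_init: both spdk_tgt instances default to the same /var/tmp/spdk.sock RPC socket, so the second launch aborts. A minimal sketch of the same collision, using only the binary path and core masks from the trace (the test waits for the first target to listen before starting the second):

  # first target owns the default RPC socket /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # second target on another core mask but the same default socket:
  # rpc.c reports the path in use and the app stops with a non-zero exit
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2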
00:05:49.933 [2024-11-08 07:33:07.850735] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:49.933 [2024-11-08 07:33:07.850748] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57123 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57123 ']' 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57123 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57123 00:05:50.192 killing process with pid 57123 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57123' 00:05:50.192 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57123 00:05:50.193 07:33:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57123 00:05:50.452 00:05:50.452 real 0m1.705s 00:05:50.452 user 0m1.969s 00:05:50.452 sys 0m0.390s 00:05:50.452 07:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.452 ************************************ 00:05:50.452 END TEST exit_on_failed_rpc_init 00:05:50.452 ************************************ 00:05:50.452 07:33:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.452 07:33:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:50.452 00:05:50.452 real 0m14.084s 00:05:50.452 user 0m13.296s 00:05:50.452 sys 0m1.560s 00:05:50.452 07:33:08 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.452 ************************************ 00:05:50.452 END TEST skip_rpc 00:05:50.452 ************************************ 00:05:50.452 07:33:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.452 07:33:08 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.452 07:33:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.452 07:33:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.452 07:33:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.452 
************************************ 00:05:50.452 START TEST rpc_client 00:05:50.452 ************************************ 00:05:50.452 07:33:08 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:50.711 * Looking for test storage... 00:05:50.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:50.711 07:33:08 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.711 07:33:08 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.711 07:33:08 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.711 07:33:08 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.711 07:33:08 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.712 07:33:08 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:50.712 07:33:08 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.712 07:33:08 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.712 --rc genhtml_branch_coverage=1 00:05:50.712 --rc genhtml_function_coverage=1 00:05:50.712 --rc genhtml_legend=1 00:05:50.712 --rc geninfo_all_blocks=1 00:05:50.712 --rc geninfo_unexecuted_blocks=1 00:05:50.712 00:05:50.712 ' 00:05:50.712 07:33:08 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.712 --rc genhtml_branch_coverage=1 00:05:50.712 --rc genhtml_function_coverage=1 00:05:50.712 --rc genhtml_legend=1 00:05:50.712 --rc geninfo_all_blocks=1 00:05:50.712 --rc geninfo_unexecuted_blocks=1 00:05:50.712 00:05:50.712 ' 00:05:50.712 07:33:08 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.712 --rc genhtml_branch_coverage=1 00:05:50.712 --rc genhtml_function_coverage=1 00:05:50.712 --rc genhtml_legend=1 00:05:50.712 --rc geninfo_all_blocks=1 00:05:50.712 --rc geninfo_unexecuted_blocks=1 00:05:50.712 00:05:50.712 ' 00:05:50.712 07:33:08 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.712 --rc genhtml_branch_coverage=1 00:05:50.712 --rc genhtml_function_coverage=1 00:05:50.712 --rc genhtml_legend=1 00:05:50.712 --rc geninfo_all_blocks=1 00:05:50.712 --rc geninfo_unexecuted_blocks=1 00:05:50.712 00:05:50.712 ' 00:05:50.712 07:33:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:50.712 OK 00:05:50.712 07:33:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:50.712 00:05:50.712 real 0m0.223s 00:05:50.712 user 0m0.117s 00:05:50.712 sys 0m0.118s 00:05:50.712 07:33:08 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.712 07:33:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:50.712 ************************************ 00:05:50.712 END TEST rpc_client 00:05:50.712 ************************************ 00:05:50.712 07:33:08 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:50.712 07:33:08 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.712 07:33:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.712 07:33:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.712 ************************************ 00:05:50.712 START TEST json_config 00:05:50.712 ************************************ 00:05:50.712 07:33:08 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.972 07:33:08 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.972 07:33:08 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.972 07:33:08 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.972 07:33:08 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.972 07:33:08 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.972 07:33:08 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.972 07:33:08 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.972 07:33:08 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.972 07:33:08 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.972 07:33:08 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.972 07:33:08 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.972 07:33:08 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:50.972 07:33:08 json_config -- scripts/common.sh@345 -- # : 1 00:05:50.972 07:33:08 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.972 07:33:08 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.972 07:33:08 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:50.972 07:33:08 json_config -- scripts/common.sh@353 -- # local d=1 00:05:50.972 07:33:08 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.972 07:33:08 json_config -- scripts/common.sh@355 -- # echo 1 00:05:50.972 07:33:08 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.972 07:33:08 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:50.972 07:33:08 json_config -- scripts/common.sh@353 -- # local d=2 00:05:50.972 07:33:08 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.972 07:33:08 json_config -- scripts/common.sh@355 -- # echo 2 00:05:50.972 07:33:08 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.972 07:33:08 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.972 07:33:08 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.972 07:33:08 json_config -- scripts/common.sh@368 -- # return 0 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.972 --rc genhtml_branch_coverage=1 00:05:50.972 --rc genhtml_function_coverage=1 00:05:50.972 --rc genhtml_legend=1 00:05:50.972 --rc geninfo_all_blocks=1 00:05:50.972 --rc geninfo_unexecuted_blocks=1 00:05:50.972 00:05:50.972 ' 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.972 --rc genhtml_branch_coverage=1 00:05:50.972 --rc genhtml_function_coverage=1 00:05:50.972 --rc genhtml_legend=1 00:05:50.972 --rc geninfo_all_blocks=1 00:05:50.972 --rc geninfo_unexecuted_blocks=1 00:05:50.972 00:05:50.972 ' 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.972 --rc genhtml_branch_coverage=1 00:05:50.972 --rc genhtml_function_coverage=1 00:05:50.972 --rc genhtml_legend=1 00:05:50.972 --rc geninfo_all_blocks=1 00:05:50.972 --rc geninfo_unexecuted_blocks=1 00:05:50.972 00:05:50.972 ' 00:05:50.972 07:33:08 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.972 --rc genhtml_branch_coverage=1 00:05:50.972 --rc genhtml_function_coverage=1 00:05:50.972 --rc genhtml_legend=1 00:05:50.972 --rc geninfo_all_blocks=1 00:05:50.972 --rc geninfo_unexecuted_blocks=1 00:05:50.972 00:05:50.972 ' 00:05:50.972 07:33:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.972 07:33:08 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.972 07:33:08 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.972 07:33:08 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.972 07:33:08 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.972 07:33:08 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.972 07:33:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.972 07:33:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.972 07:33:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.972 07:33:08 json_config -- paths/export.sh@5 -- # export PATH 00:05:50.972 07:33:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@51 -- # : 0 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.972 07:33:08 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.972 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.972 07:33:08 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.972 07:33:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:50.972 07:33:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:50.972 07:33:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:50.972 07:33:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:50.973 INFO: JSON configuration test init 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.973 Waiting for target to run... 00:05:50.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
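For readability, the per-app bookkeeping declared in the traced json_config.sh lines above amounts to these associative arrays (values copied verbatim from the trace):

  declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
  declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
  declare -A configs_path=([target]=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json [initiator]=/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json)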
00:05:50.973 07:33:08 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:50.973 07:33:08 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.973 07:33:08 json_config -- json_config/common.sh@10 -- # shift 00:05:50.973 07:33:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.973 07:33:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.973 07:33:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.973 07:33:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.973 07:33:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.973 07:33:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57280 00:05:50.973 07:33:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.973 07:33:08 json_config -- json_config/common.sh@25 -- # waitforlisten 57280 /var/tmp/spdk_tgt.sock 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@833 -- # '[' -z 57280 ']' 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.973 07:33:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:50.973 07:33:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.231 [2024-11-08 07:33:08.969435] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
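The target above (pid 57280) is launched with --wait-for-rpc, so it sits idle until configuration arrives over /var/tmp/spdk_tgt.sock. A rough sketch of the driving pattern used next in this log (assuming load_config reads the generated JSON on stdin, as the paired gen_nvme.sh / load_config trace lines suggest; the output file name is only illustrative):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  # push a full JSON configuration into the waiting target
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | $RPC load_config
  # later, dump the live configuration back out for comparison
  $RPC save_config > /tmp/running_config.json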
00:05:51.231 [2024-11-08 07:33:08.969731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57280 ] 00:05:51.489 [2024-11-08 07:33:09.349339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.489 [2024-11-08 07:33:09.393254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.057 07:33:09 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.057 07:33:09 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:52.057 07:33:09 json_config -- json_config/common.sh@26 -- # echo '' 00:05:52.057 00:05:52.057 07:33:09 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:52.057 07:33:09 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:52.057 07:33:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.057 07:33:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.057 07:33:09 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:52.057 07:33:09 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:52.057 07:33:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.057 07:33:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.057 07:33:09 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:52.057 07:33:09 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:52.057 07:33:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:52.315 [2024-11-08 07:33:10.235108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:52.574 07:33:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.574 07:33:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:52.574 07:33:10 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:52.574 07:33:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@54 -- # sort 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:52.832 07:33:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.832 07:33:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:52.832 07:33:10 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:52.832 07:33:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.832 07:33:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.833 07:33:10 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:52.833 07:33:10 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:52.833 07:33:10 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:52.833 07:33:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.833 07:33:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:53.091 MallocForNvmf0 00:05:53.091 07:33:11 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:53.091 07:33:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:53.676 MallocForNvmf1 00:05:53.676 07:33:11 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:53.677 07:33:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:53.935 [2024-11-08 07:33:11.637847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.935 07:33:11 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.935 07:33:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:54.194 07:33:11 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:54.194 07:33:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:54.194 07:33:12 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:54.194 07:33:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:54.453 07:33:12 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:54.453 07:33:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:54.711 [2024-11-08 07:33:12.602438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.711 07:33:12 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:54.711 07:33:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.711 07:33:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 07:33:12 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:54.711 07:33:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.711 07:33:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.970 07:33:12 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:54.970 07:33:12 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:54.970 07:33:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.234 MallocBdevForConfigChangeCheck 00:05:55.234 07:33:13 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:55.234 07:33:13 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:55.234 07:33:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.234 07:33:13 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:55.234 07:33:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.493 INFO: shutting down applications... 00:05:55.493 07:33:13 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
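Collected from the tgt_rpc calls traced above, the NVMe-oF side of this configuration is built with a short sequence of RPCs (arguments exactly as they appear in the trace):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  # the listener call below is what produces the 'NVMe/TCP Target Listening on 127.0.0.1 port 4420' notice
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420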
00:05:55.493 07:33:13 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:55.493 07:33:13 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:55.493 07:33:13 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:55.493 07:33:13 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:56.060 Calling clear_iscsi_subsystem 00:05:56.060 Calling clear_nvmf_subsystem 00:05:56.060 Calling clear_nbd_subsystem 00:05:56.060 Calling clear_ublk_subsystem 00:05:56.060 Calling clear_vhost_blk_subsystem 00:05:56.060 Calling clear_vhost_scsi_subsystem 00:05:56.060 Calling clear_bdev_subsystem 00:05:56.060 07:33:13 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:56.060 07:33:13 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:56.060 07:33:13 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:56.060 07:33:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.060 07:33:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:56.060 07:33:13 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:56.318 07:33:14 json_config -- json_config/json_config.sh@352 -- # break 00:05:56.318 07:33:14 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:56.318 07:33:14 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:56.318 07:33:14 json_config -- json_config/common.sh@31 -- # local app=target 00:05:56.318 07:33:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:56.318 07:33:14 json_config -- json_config/common.sh@35 -- # [[ -n 57280 ]] 00:05:56.318 07:33:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57280 00:05:56.318 07:33:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:56.318 07:33:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.318 07:33:14 json_config -- json_config/common.sh@41 -- # kill -0 57280 00:05:56.318 07:33:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:56.885 SPDK target shutdown done 00:05:56.885 07:33:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:56.885 07:33:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.885 07:33:14 json_config -- json_config/common.sh@41 -- # kill -0 57280 00:05:56.885 07:33:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:56.885 07:33:14 json_config -- json_config/common.sh@43 -- # break 00:05:56.885 07:33:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:56.885 07:33:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:56.885 07:33:14 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:56.885 INFO: relaunching applications... 
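The shutdown just traced follows a simple pattern in json_config/common.sh: send SIGINT, then poll the pid until it disappears. A roughly equivalent shell sketch of that loop:

  kill -SIGINT "$pid"                      # ask the target to exit cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break  # process gone -> shutdown done
      sleep 0.5
  done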
00:05:56.885 07:33:14 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.885 07:33:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:56.885 07:33:14 json_config -- json_config/common.sh@10 -- # shift 00:05:56.885 07:33:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.885 07:33:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.885 07:33:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.885 07:33:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.885 07:33:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.885 07:33:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57477 00:05:56.885 07:33:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.885 07:33:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.885 Waiting for target to run... 00:05:56.885 07:33:14 json_config -- json_config/common.sh@25 -- # waitforlisten 57477 /var/tmp/spdk_tgt.sock 00:05:56.885 07:33:14 json_config -- common/autotest_common.sh@833 -- # '[' -z 57477 ']' 00:05:56.885 07:33:14 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.885 07:33:14 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.885 07:33:14 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.885 07:33:14 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.885 07:33:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.885 [2024-11-08 07:33:14.755582] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:05:56.885 [2024-11-08 07:33:14.755942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57477 ] 00:05:57.450 [2024-11-08 07:33:15.141126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.450 [2024-11-08 07:33:15.185653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.450 [2024-11-08 07:33:15.322142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.708 [2024-11-08 07:33:15.537632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.708 [2024-11-08 07:33:15.569740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:57.708 00:05:57.708 INFO: Checking if target configuration is the same... 
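The 'same configuration' check announced above boils down to dumping the live config of the relaunched target and diffing it against the JSON file it was started from, with both sides normalized by config_filter.py. A sketch under those assumptions (temp file names are illustrative; the filter appears to read stdin in the trace that follows):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $RPC save_config | $FILTER -method sort > /tmp/live.json
  $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'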
00:05:57.708 07:33:15 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.708 07:33:15 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:57.708 07:33:15 json_config -- json_config/common.sh@26 -- # echo '' 00:05:57.708 07:33:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:57.708 07:33:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:57.708 07:33:15 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.708 07:33:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:57.708 07:33:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.708 + '[' 2 -ne 2 ']' 00:05:57.708 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:57.708 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:57.708 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:57.708 +++ basename /dev/fd/62 00:05:57.965 ++ mktemp /tmp/62.XXX 00:05:57.965 + tmp_file_1=/tmp/62.2VQ 00:05:57.965 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.965 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:57.965 + tmp_file_2=/tmp/spdk_tgt_config.json.SCF 00:05:57.965 + ret=0 00:05:57.965 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:58.222 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:58.222 + diff -u /tmp/62.2VQ /tmp/spdk_tgt_config.json.SCF 00:05:58.222 INFO: JSON config files are the same 00:05:58.222 + echo 'INFO: JSON config files are the same' 00:05:58.222 + rm /tmp/62.2VQ /tmp/spdk_tgt_config.json.SCF 00:05:58.222 + exit 0 00:05:58.222 INFO: changing configuration and checking if this can be detected... 00:05:58.222 07:33:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:58.222 07:33:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:58.222 07:33:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:58.223 07:33:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:58.481 07:33:16 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:58.481 07:33:16 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:58.481 07:33:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.481 + '[' 2 -ne 2 ']' 00:05:58.481 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:58.481 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
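The json_diff.sh run traced here boils down to: dump the live configuration with save_config, normalize both JSON documents with config_filter.py -method sort, and diff the results; exit 0 means the relaunched target reproduced spdk_tgt_config.json exactly. A condensed sketch of the same check, using the paths from the trace:

    rootdir=/home/vagrant/spdk_repo/spdk
    tmp_live=$(mktemp /tmp/live.json.XXX)
    tmp_disk=$(mktemp /tmp/disk.json.XXX)
    # live config from the running target, canonicalized
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > "$tmp_live"
    # on-disk config, canonicalized the same way
    "$rootdir/test/json_config/config_filter.py" -method sort \
        < "$rootdir/spdk_tgt_config.json" > "$tmp_disk"
    diff -u "$tmp_live" "$tmp_disk" && echo 'INFO: JSON config files are the same'
    rm -f "$tmp_live" "$tmp_disk"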
00:05:58.481 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:58.481 +++ basename /dev/fd/62 00:05:58.481 ++ mktemp /tmp/62.XXX 00:05:58.481 + tmp_file_1=/tmp/62.kLs 00:05:58.481 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:58.481 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:58.481 + tmp_file_2=/tmp/spdk_tgt_config.json.GpT 00:05:58.481 + ret=0 00:05:58.481 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:59.045 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:59.045 + diff -u /tmp/62.kLs /tmp/spdk_tgt_config.json.GpT 00:05:59.045 + ret=1 00:05:59.045 + echo '=== Start of file: /tmp/62.kLs ===' 00:05:59.045 + cat /tmp/62.kLs 00:05:59.045 + echo '=== End of file: /tmp/62.kLs ===' 00:05:59.045 + echo '' 00:05:59.045 + echo '=== Start of file: /tmp/spdk_tgt_config.json.GpT ===' 00:05:59.045 + cat /tmp/spdk_tgt_config.json.GpT 00:05:59.045 + echo '=== End of file: /tmp/spdk_tgt_config.json.GpT ===' 00:05:59.045 + echo '' 00:05:59.045 + rm /tmp/62.kLs /tmp/spdk_tgt_config.json.GpT 00:05:59.045 + exit 1 00:05:59.045 INFO: configuration change detected. 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 57477 ]] 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.045 07:33:16 json_config -- json_config/json_config.sh@330 -- # killprocess 57477 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@952 -- # '[' -z 57477 ']' 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@956 -- # kill -0 57477 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@957 -- # uname 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57477 00:05:59.045 
killing process with pid 57477 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57477' 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@971 -- # kill 57477 00:05:59.045 07:33:16 json_config -- common/autotest_common.sh@976 -- # wait 57477 00:05:59.309 07:33:17 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:59.309 07:33:17 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:59.309 07:33:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.309 07:33:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.309 INFO: Success 00:05:59.309 07:33:17 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:59.309 07:33:17 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:59.309 00:05:59.309 real 0m8.529s 00:05:59.309 user 0m12.076s 00:05:59.309 sys 0m1.840s 00:05:59.309 07:33:17 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.309 07:33:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.309 ************************************ 00:05:59.309 END TEST json_config 00:05:59.309 ************************************ 00:05:59.309 07:33:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:59.309 07:33:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.309 07:33:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.309 07:33:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.309 ************************************ 00:05:59.309 START TEST json_config_extra_key 00:05:59.309 ************************************ 00:05:59.309 07:33:17 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.577 07:33:17 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.577 07:33:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:59.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.577 --rc genhtml_branch_coverage=1 00:05:59.577 --rc genhtml_function_coverage=1 00:05:59.577 --rc genhtml_legend=1 00:05:59.577 --rc geninfo_all_blocks=1 00:05:59.577 --rc geninfo_unexecuted_blocks=1 00:05:59.577 00:05:59.577 ' 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:59.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.577 --rc genhtml_branch_coverage=1 00:05:59.577 --rc genhtml_function_coverage=1 00:05:59.577 --rc genhtml_legend=1 00:05:59.577 --rc geninfo_all_blocks=1 00:05:59.577 --rc geninfo_unexecuted_blocks=1 00:05:59.577 00:05:59.577 ' 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:59.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.577 --rc genhtml_branch_coverage=1 00:05:59.577 --rc genhtml_function_coverage=1 00:05:59.577 --rc genhtml_legend=1 00:05:59.577 --rc geninfo_all_blocks=1 00:05:59.577 --rc geninfo_unexecuted_blocks=1 00:05:59.577 00:05:59.577 ' 00:05:59.577 07:33:17 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:59.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.577 --rc genhtml_branch_coverage=1 00:05:59.577 --rc genhtml_function_coverage=1 00:05:59.577 --rc genhtml_legend=1 00:05:59.577 --rc geninfo_all_blocks=1 00:05:59.577 --rc geninfo_unexecuted_blocks=1 00:05:59.577 00:05:59.577 ' 00:05:59.577 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.577 07:33:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.578 07:33:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.578 07:33:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.578 07:33:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.578 07:33:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.578 07:33:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.578 07:33:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.578 07:33:17 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.578 07:33:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:59.578 07:33:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.578 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.578 07:33:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:59.578 INFO: launching applications... 
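json_config/common.sh, sourced just above, keeps all per-app state in bash associative arrays keyed by an app name ('target' here, 'initiator' in runs that use spdk_initiator_config.json), so the same start/stop helpers work for any app. A trimmed sketch of that bookkeeping as it is set up for this extra_key test:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    # every helper then indexes the arrays by the same key, e.g.:
    app=target
    echo "launching $app with ${app_params[$app]} on ${app_socket[$app]}"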
00:05:59.578 07:33:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57625 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.578 Waiting for target to run... 00:05:59.578 07:33:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57625 /var/tmp/spdk_tgt.sock 00:05:59.578 07:33:17 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57625 ']' 00:05:59.578 07:33:17 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.578 07:33:17 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:59.578 07:33:17 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.578 07:33:17 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:59.578 07:33:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.578 [2024-11-08 07:33:17.520753] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:05:59.578 [2024-11-08 07:33:17.520884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57625 ] 00:06:00.143 [2024-11-08 07:33:17.923390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.143 [2024-11-08 07:33:17.969104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.143 [2024-11-08 07:33:18.000087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.707 00:06:00.707 INFO: shutting down applications... 00:06:00.707 07:33:18 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:00.707 07:33:18 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:00.707 07:33:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:00.707 07:33:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57625 ]] 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57625 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57625 00:06:00.707 07:33:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.273 07:33:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.273 07:33:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.273 07:33:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57625 00:06:01.273 07:33:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.273 SPDK target shutdown done 00:06:01.273 Success 00:06:01.273 07:33:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:01.273 07:33:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.273 07:33:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:01.273 07:33:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:01.273 00:06:01.273 real 0m1.794s 00:06:01.273 user 0m1.647s 00:06:01.273 sys 0m0.444s 00:06:01.273 07:33:19 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.273 07:33:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.273 ************************************ 00:06:01.273 END TEST json_config_extra_key 00:06:01.273 ************************************ 00:06:01.273 07:33:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.273 07:33:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.273 07:33:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.273 07:33:19 -- common/autotest_common.sh@10 -- # set +x 00:06:01.273 ************************************ 00:06:01.273 START TEST alias_rpc 00:06:01.273 ************************************ 00:06:01.273 07:33:19 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.273 * Looking for test storage... 
00:06:01.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:01.273 07:33:19 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:01.273 07:33:19 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:01.273 07:33:19 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:01.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
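The scripts/common.sh fragments interleaved with this startup are the suite's dotted-version comparison: 'lt 1.15 2' splits both strings on '.', '-' and ':', compares field by field, and here concludes that the installed lcov is a 1.x release, so the old-style --rc lcov_*_coverage=1 options are kept. A self-contained sketch of the same comparison (simplified from the traced helper):

    lt() {                      # true when $1 sorts before $2 as dotted versions
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                # equal versions are not "less than"
    }

    lt 1.15 2 && lcov_opts='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'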
00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.532 07:33:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.532 --rc genhtml_branch_coverage=1 00:06:01.532 --rc genhtml_function_coverage=1 00:06:01.532 --rc genhtml_legend=1 00:06:01.532 --rc geninfo_all_blocks=1 00:06:01.532 --rc geninfo_unexecuted_blocks=1 00:06:01.532 00:06:01.532 ' 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.532 --rc genhtml_branch_coverage=1 00:06:01.532 --rc genhtml_function_coverage=1 00:06:01.532 --rc genhtml_legend=1 00:06:01.532 --rc geninfo_all_blocks=1 00:06:01.532 --rc geninfo_unexecuted_blocks=1 00:06:01.532 00:06:01.532 ' 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.532 --rc genhtml_branch_coverage=1 00:06:01.532 --rc genhtml_function_coverage=1 00:06:01.532 --rc genhtml_legend=1 00:06:01.532 --rc geninfo_all_blocks=1 00:06:01.532 --rc geninfo_unexecuted_blocks=1 00:06:01.532 00:06:01.532 ' 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:01.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.532 --rc genhtml_branch_coverage=1 00:06:01.532 --rc genhtml_function_coverage=1 00:06:01.532 --rc genhtml_legend=1 00:06:01.532 --rc geninfo_all_blocks=1 00:06:01.532 --rc geninfo_unexecuted_blocks=1 00:06:01.532 00:06:01.532 ' 00:06:01.532 07:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.532 07:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57703 00:06:01.532 07:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57703 00:06:01.532 07:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57703 ']' 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:01.532 07:33:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.532 [2024-11-08 07:33:19.382429] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:06:01.532 [2024-11-08 07:33:19.382848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57703 ] 00:06:01.790 [2024-11-08 07:33:19.551770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.790 [2024-11-08 07:33:19.615853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.790 [2024-11-08 07:33:19.682073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.048 07:33:19 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.048 07:33:19 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:02.048 07:33:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:02.305 07:33:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57703 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57703 ']' 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57703 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57703 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57703' 00:06:02.305 killing process with pid 57703 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@971 -- # kill 57703 00:06:02.305 07:33:20 alias_rpc -- common/autotest_common.sh@976 -- # wait 57703 00:06:02.870 ************************************ 00:06:02.870 END TEST alias_rpc 00:06:02.870 ************************************ 00:06:02.870 00:06:02.870 real 0m1.455s 00:06:02.870 user 0m1.565s 00:06:02.870 sys 0m0.461s 00:06:02.870 07:33:20 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.870 07:33:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.870 07:33:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:02.870 07:33:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:02.870 07:33:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.870 07:33:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.870 07:33:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.870 ************************************ 00:06:02.870 START TEST spdkcli_tcp 00:06:02.870 ************************************ 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:02.870 * Looking for test storage... 
00:06:02.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.870 07:33:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.870 --rc genhtml_branch_coverage=1 00:06:02.870 --rc genhtml_function_coverage=1 00:06:02.870 --rc genhtml_legend=1 00:06:02.870 --rc geninfo_all_blocks=1 00:06:02.870 --rc geninfo_unexecuted_blocks=1 00:06:02.870 00:06:02.870 ' 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.870 --rc genhtml_branch_coverage=1 00:06:02.870 --rc genhtml_function_coverage=1 00:06:02.870 --rc genhtml_legend=1 00:06:02.870 --rc geninfo_all_blocks=1 00:06:02.870 --rc geninfo_unexecuted_blocks=1 00:06:02.870 
00:06:02.870 ' 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.870 --rc genhtml_branch_coverage=1 00:06:02.870 --rc genhtml_function_coverage=1 00:06:02.870 --rc genhtml_legend=1 00:06:02.870 --rc geninfo_all_blocks=1 00:06:02.870 --rc geninfo_unexecuted_blocks=1 00:06:02.870 00:06:02.870 ' 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:02.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.870 --rc genhtml_branch_coverage=1 00:06:02.870 --rc genhtml_function_coverage=1 00:06:02.870 --rc genhtml_legend=1 00:06:02.870 --rc geninfo_all_blocks=1 00:06:02.870 --rc geninfo_unexecuted_blocks=1 00:06:02.870 00:06:02.870 ' 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57780 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:02.870 07:33:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57780 00:06:02.870 07:33:20 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57780 ']' 00:06:02.871 07:33:20 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.871 07:33:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:02.871 07:33:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.871 07:33:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:02.871 07:33:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.127 [2024-11-08 07:33:20.881439] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
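In the spdkcli_tcp run that follows, the target listens only on /var/tmp/spdk.sock; the test (tcp.sh) puts socat in front of that socket so rpc.py can reach it over TCP at 127.0.0.1:9998, which is how the long rpc_get_methods listing below is obtained. The bridge reduces to the following, with the address, port and retry options as used in this run:

    # forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r/-t add retries and a timeout while the listener comes up
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2> /dev/null || true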
00:06:03.127 [2024-11-08 07:33:20.882104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57780 ] 00:06:03.127 [2024-11-08 07:33:21.037906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.384 [2024-11-08 07:33:21.103572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.384 [2024-11-08 07:33:21.103584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.384 [2024-11-08 07:33:21.167786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.967 07:33:21 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:03.967 07:33:21 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:03.967 07:33:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57797 00:06:03.967 07:33:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:03.967 07:33:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:04.225 [ 00:06:04.225 "bdev_malloc_delete", 00:06:04.225 "bdev_malloc_create", 00:06:04.225 "bdev_null_resize", 00:06:04.225 "bdev_null_delete", 00:06:04.225 "bdev_null_create", 00:06:04.225 "bdev_nvme_cuse_unregister", 00:06:04.225 "bdev_nvme_cuse_register", 00:06:04.225 "bdev_opal_new_user", 00:06:04.225 "bdev_opal_set_lock_state", 00:06:04.225 "bdev_opal_delete", 00:06:04.225 "bdev_opal_get_info", 00:06:04.225 "bdev_opal_create", 00:06:04.225 "bdev_nvme_opal_revert", 00:06:04.225 "bdev_nvme_opal_init", 00:06:04.225 "bdev_nvme_send_cmd", 00:06:04.225 "bdev_nvme_set_keys", 00:06:04.225 "bdev_nvme_get_path_iostat", 00:06:04.225 "bdev_nvme_get_mdns_discovery_info", 00:06:04.225 "bdev_nvme_stop_mdns_discovery", 00:06:04.225 "bdev_nvme_start_mdns_discovery", 00:06:04.225 "bdev_nvme_set_multipath_policy", 00:06:04.225 "bdev_nvme_set_preferred_path", 00:06:04.225 "bdev_nvme_get_io_paths", 00:06:04.225 "bdev_nvme_remove_error_injection", 00:06:04.225 "bdev_nvme_add_error_injection", 00:06:04.225 "bdev_nvme_get_discovery_info", 00:06:04.225 "bdev_nvme_stop_discovery", 00:06:04.225 "bdev_nvme_start_discovery", 00:06:04.225 "bdev_nvme_get_controller_health_info", 00:06:04.225 "bdev_nvme_disable_controller", 00:06:04.225 "bdev_nvme_enable_controller", 00:06:04.225 "bdev_nvme_reset_controller", 00:06:04.225 "bdev_nvme_get_transport_statistics", 00:06:04.225 "bdev_nvme_apply_firmware", 00:06:04.225 "bdev_nvme_detach_controller", 00:06:04.225 "bdev_nvme_get_controllers", 00:06:04.225 "bdev_nvme_attach_controller", 00:06:04.225 "bdev_nvme_set_hotplug", 00:06:04.225 "bdev_nvme_set_options", 00:06:04.225 "bdev_passthru_delete", 00:06:04.225 "bdev_passthru_create", 00:06:04.225 "bdev_lvol_set_parent_bdev", 00:06:04.225 "bdev_lvol_set_parent", 00:06:04.225 "bdev_lvol_check_shallow_copy", 00:06:04.225 "bdev_lvol_start_shallow_copy", 00:06:04.225 "bdev_lvol_grow_lvstore", 00:06:04.225 "bdev_lvol_get_lvols", 00:06:04.225 "bdev_lvol_get_lvstores", 00:06:04.225 "bdev_lvol_delete", 00:06:04.225 "bdev_lvol_set_read_only", 00:06:04.225 "bdev_lvol_resize", 00:06:04.225 "bdev_lvol_decouple_parent", 00:06:04.225 "bdev_lvol_inflate", 00:06:04.225 "bdev_lvol_rename", 00:06:04.225 "bdev_lvol_clone_bdev", 00:06:04.225 "bdev_lvol_clone", 00:06:04.225 "bdev_lvol_snapshot", 
00:06:04.225 "bdev_lvol_create", 00:06:04.225 "bdev_lvol_delete_lvstore", 00:06:04.225 "bdev_lvol_rename_lvstore", 00:06:04.225 "bdev_lvol_create_lvstore", 00:06:04.225 "bdev_raid_set_options", 00:06:04.225 "bdev_raid_remove_base_bdev", 00:06:04.225 "bdev_raid_add_base_bdev", 00:06:04.225 "bdev_raid_delete", 00:06:04.225 "bdev_raid_create", 00:06:04.225 "bdev_raid_get_bdevs", 00:06:04.225 "bdev_error_inject_error", 00:06:04.225 "bdev_error_delete", 00:06:04.225 "bdev_error_create", 00:06:04.225 "bdev_split_delete", 00:06:04.225 "bdev_split_create", 00:06:04.225 "bdev_delay_delete", 00:06:04.225 "bdev_delay_create", 00:06:04.225 "bdev_delay_update_latency", 00:06:04.225 "bdev_zone_block_delete", 00:06:04.225 "bdev_zone_block_create", 00:06:04.225 "blobfs_create", 00:06:04.225 "blobfs_detect", 00:06:04.225 "blobfs_set_cache_size", 00:06:04.225 "bdev_aio_delete", 00:06:04.225 "bdev_aio_rescan", 00:06:04.225 "bdev_aio_create", 00:06:04.225 "bdev_ftl_set_property", 00:06:04.225 "bdev_ftl_get_properties", 00:06:04.225 "bdev_ftl_get_stats", 00:06:04.225 "bdev_ftl_unmap", 00:06:04.225 "bdev_ftl_unload", 00:06:04.225 "bdev_ftl_delete", 00:06:04.225 "bdev_ftl_load", 00:06:04.225 "bdev_ftl_create", 00:06:04.225 "bdev_virtio_attach_controller", 00:06:04.225 "bdev_virtio_scsi_get_devices", 00:06:04.225 "bdev_virtio_detach_controller", 00:06:04.225 "bdev_virtio_blk_set_hotplug", 00:06:04.225 "bdev_iscsi_delete", 00:06:04.225 "bdev_iscsi_create", 00:06:04.225 "bdev_iscsi_set_options", 00:06:04.225 "bdev_uring_delete", 00:06:04.225 "bdev_uring_rescan", 00:06:04.225 "bdev_uring_create", 00:06:04.225 "accel_error_inject_error", 00:06:04.225 "ioat_scan_accel_module", 00:06:04.225 "dsa_scan_accel_module", 00:06:04.225 "iaa_scan_accel_module", 00:06:04.225 "keyring_file_remove_key", 00:06:04.225 "keyring_file_add_key", 00:06:04.225 "keyring_linux_set_options", 00:06:04.225 "fsdev_aio_delete", 00:06:04.225 "fsdev_aio_create", 00:06:04.225 "iscsi_get_histogram", 00:06:04.225 "iscsi_enable_histogram", 00:06:04.225 "iscsi_set_options", 00:06:04.225 "iscsi_get_auth_groups", 00:06:04.225 "iscsi_auth_group_remove_secret", 00:06:04.225 "iscsi_auth_group_add_secret", 00:06:04.225 "iscsi_delete_auth_group", 00:06:04.225 "iscsi_create_auth_group", 00:06:04.225 "iscsi_set_discovery_auth", 00:06:04.225 "iscsi_get_options", 00:06:04.225 "iscsi_target_node_request_logout", 00:06:04.225 "iscsi_target_node_set_redirect", 00:06:04.225 "iscsi_target_node_set_auth", 00:06:04.225 "iscsi_target_node_add_lun", 00:06:04.225 "iscsi_get_stats", 00:06:04.225 "iscsi_get_connections", 00:06:04.225 "iscsi_portal_group_set_auth", 00:06:04.225 "iscsi_start_portal_group", 00:06:04.225 "iscsi_delete_portal_group", 00:06:04.225 "iscsi_create_portal_group", 00:06:04.225 "iscsi_get_portal_groups", 00:06:04.225 "iscsi_delete_target_node", 00:06:04.225 "iscsi_target_node_remove_pg_ig_maps", 00:06:04.225 "iscsi_target_node_add_pg_ig_maps", 00:06:04.225 "iscsi_create_target_node", 00:06:04.225 "iscsi_get_target_nodes", 00:06:04.225 "iscsi_delete_initiator_group", 00:06:04.225 "iscsi_initiator_group_remove_initiators", 00:06:04.225 "iscsi_initiator_group_add_initiators", 00:06:04.225 "iscsi_create_initiator_group", 00:06:04.225 "iscsi_get_initiator_groups", 00:06:04.225 "nvmf_set_crdt", 00:06:04.225 "nvmf_set_config", 00:06:04.225 "nvmf_set_max_subsystems", 00:06:04.225 "nvmf_stop_mdns_prr", 00:06:04.225 "nvmf_publish_mdns_prr", 00:06:04.225 "nvmf_subsystem_get_listeners", 00:06:04.225 "nvmf_subsystem_get_qpairs", 00:06:04.225 
"nvmf_subsystem_get_controllers", 00:06:04.225 "nvmf_get_stats", 00:06:04.225 "nvmf_get_transports", 00:06:04.225 "nvmf_create_transport", 00:06:04.225 "nvmf_get_targets", 00:06:04.225 "nvmf_delete_target", 00:06:04.225 "nvmf_create_target", 00:06:04.225 "nvmf_subsystem_allow_any_host", 00:06:04.225 "nvmf_subsystem_set_keys", 00:06:04.226 "nvmf_subsystem_remove_host", 00:06:04.226 "nvmf_subsystem_add_host", 00:06:04.226 "nvmf_ns_remove_host", 00:06:04.226 "nvmf_ns_add_host", 00:06:04.226 "nvmf_subsystem_remove_ns", 00:06:04.226 "nvmf_subsystem_set_ns_ana_group", 00:06:04.226 "nvmf_subsystem_add_ns", 00:06:04.226 "nvmf_subsystem_listener_set_ana_state", 00:06:04.226 "nvmf_discovery_get_referrals", 00:06:04.226 "nvmf_discovery_remove_referral", 00:06:04.226 "nvmf_discovery_add_referral", 00:06:04.226 "nvmf_subsystem_remove_listener", 00:06:04.226 "nvmf_subsystem_add_listener", 00:06:04.226 "nvmf_delete_subsystem", 00:06:04.226 "nvmf_create_subsystem", 00:06:04.226 "nvmf_get_subsystems", 00:06:04.226 "env_dpdk_get_mem_stats", 00:06:04.226 "nbd_get_disks", 00:06:04.226 "nbd_stop_disk", 00:06:04.226 "nbd_start_disk", 00:06:04.226 "ublk_recover_disk", 00:06:04.226 "ublk_get_disks", 00:06:04.226 "ublk_stop_disk", 00:06:04.226 "ublk_start_disk", 00:06:04.226 "ublk_destroy_target", 00:06:04.226 "ublk_create_target", 00:06:04.226 "virtio_blk_create_transport", 00:06:04.226 "virtio_blk_get_transports", 00:06:04.226 "vhost_controller_set_coalescing", 00:06:04.226 "vhost_get_controllers", 00:06:04.226 "vhost_delete_controller", 00:06:04.226 "vhost_create_blk_controller", 00:06:04.226 "vhost_scsi_controller_remove_target", 00:06:04.226 "vhost_scsi_controller_add_target", 00:06:04.226 "vhost_start_scsi_controller", 00:06:04.226 "vhost_create_scsi_controller", 00:06:04.226 "thread_set_cpumask", 00:06:04.226 "scheduler_set_options", 00:06:04.226 "framework_get_governor", 00:06:04.226 "framework_get_scheduler", 00:06:04.226 "framework_set_scheduler", 00:06:04.226 "framework_get_reactors", 00:06:04.226 "thread_get_io_channels", 00:06:04.226 "thread_get_pollers", 00:06:04.226 "thread_get_stats", 00:06:04.226 "framework_monitor_context_switch", 00:06:04.226 "spdk_kill_instance", 00:06:04.226 "log_enable_timestamps", 00:06:04.226 "log_get_flags", 00:06:04.226 "log_clear_flag", 00:06:04.226 "log_set_flag", 00:06:04.226 "log_get_level", 00:06:04.226 "log_set_level", 00:06:04.226 "log_get_print_level", 00:06:04.226 "log_set_print_level", 00:06:04.226 "framework_enable_cpumask_locks", 00:06:04.226 "framework_disable_cpumask_locks", 00:06:04.226 "framework_wait_init", 00:06:04.226 "framework_start_init", 00:06:04.226 "scsi_get_devices", 00:06:04.226 "bdev_get_histogram", 00:06:04.226 "bdev_enable_histogram", 00:06:04.226 "bdev_set_qos_limit", 00:06:04.226 "bdev_set_qd_sampling_period", 00:06:04.226 "bdev_get_bdevs", 00:06:04.226 "bdev_reset_iostat", 00:06:04.226 "bdev_get_iostat", 00:06:04.226 "bdev_examine", 00:06:04.226 "bdev_wait_for_examine", 00:06:04.226 "bdev_set_options", 00:06:04.226 "accel_get_stats", 00:06:04.226 "accel_set_options", 00:06:04.226 "accel_set_driver", 00:06:04.226 "accel_crypto_key_destroy", 00:06:04.226 "accel_crypto_keys_get", 00:06:04.226 "accel_crypto_key_create", 00:06:04.226 "accel_assign_opc", 00:06:04.226 "accel_get_module_info", 00:06:04.226 "accel_get_opc_assignments", 00:06:04.226 "vmd_rescan", 00:06:04.226 "vmd_remove_device", 00:06:04.226 "vmd_enable", 00:06:04.226 "sock_get_default_impl", 00:06:04.226 "sock_set_default_impl", 00:06:04.226 "sock_impl_set_options", 00:06:04.226 
"sock_impl_get_options", 00:06:04.226 "iobuf_get_stats", 00:06:04.226 "iobuf_set_options", 00:06:04.226 "keyring_get_keys", 00:06:04.226 "framework_get_pci_devices", 00:06:04.226 "framework_get_config", 00:06:04.226 "framework_get_subsystems", 00:06:04.226 "fsdev_set_opts", 00:06:04.226 "fsdev_get_opts", 00:06:04.226 "trace_get_info", 00:06:04.226 "trace_get_tpoint_group_mask", 00:06:04.226 "trace_disable_tpoint_group", 00:06:04.226 "trace_enable_tpoint_group", 00:06:04.226 "trace_clear_tpoint_mask", 00:06:04.226 "trace_set_tpoint_mask", 00:06:04.226 "notify_get_notifications", 00:06:04.226 "notify_get_types", 00:06:04.226 "spdk_get_version", 00:06:04.226 "rpc_get_methods" 00:06:04.226 ] 00:06:04.226 07:33:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:04.226 07:33:21 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.226 07:33:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.226 07:33:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:04.226 07:33:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57780 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57780 ']' 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57780 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57780 00:06:04.226 killing process with pid 57780 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57780' 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57780 00:06:04.226 07:33:22 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57780 00:06:04.484 ************************************ 00:06:04.484 END TEST spdkcli_tcp 00:06:04.484 ************************************ 00:06:04.484 00:06:04.484 real 0m1.772s 00:06:04.484 user 0m3.146s 00:06:04.484 sys 0m0.482s 00:06:04.484 07:33:22 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:04.484 07:33:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.484 07:33:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.484 07:33:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:04.484 07:33:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.484 07:33:22 -- common/autotest_common.sh@10 -- # set +x 00:06:04.742 ************************************ 00:06:04.742 START TEST dpdk_mem_utility 00:06:04.742 ************************************ 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:04.742 * Looking for test storage... 
00:06:04.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:04.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
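Once its spdk_tgt is up, test_dpdk_mem_info.sh (traced below) asks the target for a DPDK memory snapshot via the env_dpdk_get_mem_stats RPC, which reports the dump file /tmp/spdk_mem_dump.txt, and then runs scripts/dpdk_mem_info.py over that dump to print the heap/mempool/memzone summary seen at the end of this section. A sketch of those two steps, with paths as in the trace (dpdk_mem_info.py is assumed to pick the dump up from its default location, as it does here):

    rootdir=/home/vagrant/spdk_repo/spdk
    # ask the running target to write out its DPDK memory statistics
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    # summarize the dump: overall view, then per-heap detail for heap 0
    "$rootdir/scripts/dpdk_mem_info.py"
    "$rootdir/scripts/dpdk_mem_info.py" -m 0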
00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.742 07:33:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.742 --rc genhtml_branch_coverage=1 00:06:04.742 --rc genhtml_function_coverage=1 00:06:04.742 --rc genhtml_legend=1 00:06:04.742 --rc geninfo_all_blocks=1 00:06:04.742 --rc geninfo_unexecuted_blocks=1 00:06:04.742 00:06:04.742 ' 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.742 --rc genhtml_branch_coverage=1 00:06:04.742 --rc genhtml_function_coverage=1 00:06:04.742 --rc genhtml_legend=1 00:06:04.742 --rc geninfo_all_blocks=1 00:06:04.742 --rc geninfo_unexecuted_blocks=1 00:06:04.742 00:06:04.742 ' 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.742 --rc genhtml_branch_coverage=1 00:06:04.742 --rc genhtml_function_coverage=1 00:06:04.742 --rc genhtml_legend=1 00:06:04.742 --rc geninfo_all_blocks=1 00:06:04.742 --rc geninfo_unexecuted_blocks=1 00:06:04.742 00:06:04.742 ' 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.742 --rc genhtml_branch_coverage=1 00:06:04.742 --rc genhtml_function_coverage=1 00:06:04.742 --rc genhtml_legend=1 00:06:04.742 --rc geninfo_all_blocks=1 00:06:04.742 --rc geninfo_unexecuted_blocks=1 00:06:04.742 00:06:04.742 ' 00:06:04.742 07:33:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:04.742 07:33:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57873 00:06:04.742 07:33:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57873 00:06:04.742 07:33:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57873 ']' 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.742 07:33:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.742 [2024-11-08 07:33:22.699667] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:06:04.742 [2024-11-08 07:33:22.699889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57873 ] 00:06:05.000 [2024-11-08 07:33:22.842488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.000 [2024-11-08 07:33:22.895650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.000 [2024-11-08 07:33:22.953794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.258 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:05.258 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:05.258 07:33:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:05.258 07:33:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:05.258 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.258 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.258 { 00:06:05.258 "filename": "/tmp/spdk_mem_dump.txt" 00:06:05.258 } 00:06:05.258 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.258 07:33:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:05.258 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:05.258 1 heaps totaling size 810.000000 MiB 00:06:05.258 size: 810.000000 MiB heap id: 0 00:06:05.258 end heaps---------- 00:06:05.258 9 mempools totaling size 595.772034 MiB 00:06:05.258 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:05.258 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:05.258 size: 92.545471 MiB name: bdev_io_57873 00:06:05.258 size: 50.003479 MiB name: msgpool_57873 00:06:05.258 size: 36.509338 MiB name: fsdev_io_57873 00:06:05.258 size: 21.763794 MiB name: PDU_Pool 00:06:05.258 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:05.258 size: 4.133484 MiB name: evtpool_57873 00:06:05.258 size: 0.026123 MiB name: Session_Pool 00:06:05.258 end mempools------- 00:06:05.258 6 memzones totaling size 4.142822 MiB 00:06:05.258 size: 1.000366 MiB name: RG_ring_0_57873 00:06:05.258 size: 1.000366 MiB name: RG_ring_1_57873 00:06:05.258 size: 1.000366 MiB name: RG_ring_4_57873 00:06:05.258 size: 1.000366 MiB name: RG_ring_5_57873 00:06:05.258 size: 0.125366 MiB name: RG_ring_2_57873 00:06:05.258 size: 0.015991 MiB name: RG_ring_3_57873 00:06:05.258 end memzones------- 00:06:05.258 07:33:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:05.518 heap id: 0 total size: 810.000000 MiB number of busy elements: 314 number of free elements: 15 00:06:05.518 list of free elements. 
size: 10.813049 MiB 00:06:05.518 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:05.518 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:05.518 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:05.518 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:05.518 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:05.518 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:05.518 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:05.518 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:05.519 element at address: 0x20001a600000 with size: 0.566589 MiB 00:06:05.519 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:05.519 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:05.519 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:05.519 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:05.519 element at address: 0x200027a00000 with size: 0.396667 MiB 00:06:05.519 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:05.519 list of standard malloc elements. size: 199.268066 MiB 00:06:05.519 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:05.519 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:05.519 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:05.519 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:05.519 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:05.519 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:05.519 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:05.519 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:05.519 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:05.519 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:05.519 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:05.519 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:05.519 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:05.519 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6910c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691180 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691240 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691300 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692b00 with size: 0.000183 MiB 
00:06:05.520 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:05.520 element at 
address: 0x20001a695080 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:05.520 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a658c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a65980 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6c580 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e580 
with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:05.520 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:05.521 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:05.521 list of memzone associated elements. 
size: 599.918884 MiB 00:06:05.521 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:05.521 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:05.521 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:05.521 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:05.521 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:05.521 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57873_0 00:06:05.521 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:05.521 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57873_0 00:06:05.521 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:05.521 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57873_0 00:06:05.521 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:05.521 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:05.521 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:05.521 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:05.521 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:05.521 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57873_0 00:06:05.521 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:05.521 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57873 00:06:05.521 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:05.521 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57873 00:06:05.521 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:05.521 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:05.521 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:05.521 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:05.521 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:05.521 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:05.521 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:05.521 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:05.521 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:05.521 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57873 00:06:05.521 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:05.521 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57873 00:06:05.521 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:05.521 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57873 00:06:05.521 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:05.521 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57873 00:06:05.521 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:05.521 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57873 00:06:05.521 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:05.521 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57873 00:06:05.521 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:05.521 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:05.521 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:05.521 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:05.521 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:05.521 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:05.521 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:05.521 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57873 00:06:05.521 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:05.521 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57873 00:06:05.521 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:05.521 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:05.521 element at address: 0x200027a65a40 with size: 0.023743 MiB 00:06:05.521 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:05.521 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:05.521 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57873 00:06:05.521 element at address: 0x200027a6bb80 with size: 0.002441 MiB 00:06:05.521 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:05.521 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:05.521 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57873 00:06:05.521 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:05.521 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57873 00:06:05.521 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:05.521 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57873 00:06:05.521 element at address: 0x200027a6c640 with size: 0.000305 MiB 00:06:05.521 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:05.521 07:33:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:05.521 07:33:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57873 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57873 ']' 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57873 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57873 00:06:05.521 killing process with pid 57873 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57873' 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57873 00:06:05.521 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57873 00:06:05.787 00:06:05.787 real 0m1.162s 00:06:05.787 user 0m1.120s 00:06:05.787 sys 0m0.384s 00:06:05.787 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.787 ************************************ 00:06:05.787 END TEST dpdk_mem_utility 00:06:05.787 ************************************ 00:06:05.787 07:33:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.787 07:33:23 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:05.787 07:33:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:05.787 07:33:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.787 07:33:23 -- common/autotest_common.sh@10 -- # set +x 
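For reference, the memory report above can be regenerated by hand against a running spdk_tgt; this is a minimal sketch assuming the default /var/tmp/spdk.sock RPC socket and the repository layout used in this workspace, driving the same env_dpdk_get_mem_stats RPC and dpdk_mem_info.py script that test_dpdk_mem_info.sh uses:

# Ask the running target to write its DPDK memory stats (the output above shows the default file /tmp/spdk_mem_dump.txt)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# Summarize heaps, mempools and memzones from that dump
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
# Expand heap 0 into the per-element listing shown above
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0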
00:06:05.787 ************************************ 00:06:05.787 START TEST event 00:06:05.787 ************************************ 00:06:05.787 07:33:23 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:06.058 * Looking for test storage... 00:06:06.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:06.058 07:33:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.058 07:33:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.058 07:33:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.058 07:33:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.058 07:33:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.058 07:33:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.058 07:33:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.058 07:33:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.058 07:33:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.058 07:33:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.058 07:33:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.058 07:33:23 event -- scripts/common.sh@344 -- # case "$op" in 00:06:06.058 07:33:23 event -- scripts/common.sh@345 -- # : 1 00:06:06.058 07:33:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.058 07:33:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.058 07:33:23 event -- scripts/common.sh@365 -- # decimal 1 00:06:06.058 07:33:23 event -- scripts/common.sh@353 -- # local d=1 00:06:06.058 07:33:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.058 07:33:23 event -- scripts/common.sh@355 -- # echo 1 00:06:06.058 07:33:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.058 07:33:23 event -- scripts/common.sh@366 -- # decimal 2 00:06:06.058 07:33:23 event -- scripts/common.sh@353 -- # local d=2 00:06:06.058 07:33:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.058 07:33:23 event -- scripts/common.sh@355 -- # echo 2 00:06:06.058 07:33:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.058 07:33:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.058 07:33:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.058 07:33:23 event -- scripts/common.sh@368 -- # return 0 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:06.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.058 --rc genhtml_branch_coverage=1 00:06:06.058 --rc genhtml_function_coverage=1 00:06:06.058 --rc genhtml_legend=1 00:06:06.058 --rc geninfo_all_blocks=1 00:06:06.058 --rc geninfo_unexecuted_blocks=1 00:06:06.058 00:06:06.058 ' 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:06.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.058 --rc genhtml_branch_coverage=1 00:06:06.058 --rc genhtml_function_coverage=1 00:06:06.058 --rc genhtml_legend=1 00:06:06.058 --rc 
geninfo_all_blocks=1 00:06:06.058 --rc geninfo_unexecuted_blocks=1 00:06:06.058 00:06:06.058 ' 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:06.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.058 --rc genhtml_branch_coverage=1 00:06:06.058 --rc genhtml_function_coverage=1 00:06:06.058 --rc genhtml_legend=1 00:06:06.058 --rc geninfo_all_blocks=1 00:06:06.058 --rc geninfo_unexecuted_blocks=1 00:06:06.058 00:06:06.058 ' 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:06.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.058 --rc genhtml_branch_coverage=1 00:06:06.058 --rc genhtml_function_coverage=1 00:06:06.058 --rc genhtml_legend=1 00:06:06.058 --rc geninfo_all_blocks=1 00:06:06.058 --rc geninfo_unexecuted_blocks=1 00:06:06.058 00:06:06.058 ' 00:06:06.058 07:33:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:06.058 07:33:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.058 07:33:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:06.058 07:33:23 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.058 07:33:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.058 ************************************ 00:06:06.058 START TEST event_perf 00:06:06.058 ************************************ 00:06:06.058 07:33:23 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.058 Running I/O for 1 seconds...[2024-11-08 07:33:23.909622] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:06.058 [2024-11-08 07:33:23.909709] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57951 ] 00:06:06.317 [2024-11-08 07:33:24.058358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.317 [2024-11-08 07:33:24.110274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.317 [2024-11-08 07:33:24.110349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.317 [2024-11-08 07:33:24.110491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.317 Running I/O for 1 seconds...[2024-11-08 07:33:24.110491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.251 00:06:07.251 lcore 0: 195174 00:06:07.251 lcore 1: 195172 00:06:07.251 lcore 2: 195174 00:06:07.251 lcore 3: 195173 00:06:07.251 done. 
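The per-lcore event counts above come from the standalone event_perf app; as a quick usage note, the command line matches the run above, with -m giving the reactor core mask and -t the run time in seconds:

# Re-run the event framework perf test on cores 0-3 for 1 second
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1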
00:06:07.251 00:06:07.251 real 0m1.266s 00:06:07.251 user 0m4.092s 00:06:07.251 sys 0m0.054s 00:06:07.251 07:33:25 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.251 07:33:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.251 ************************************ 00:06:07.251 END TEST event_perf 00:06:07.251 ************************************ 00:06:07.251 07:33:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:07.251 07:33:25 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:07.251 07:33:25 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.251 07:33:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.509 ************************************ 00:06:07.509 START TEST event_reactor 00:06:07.509 ************************************ 00:06:07.509 07:33:25 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:07.509 [2024-11-08 07:33:25.235697] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:07.509 [2024-11-08 07:33:25.235870] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57984 ] 00:06:07.509 [2024-11-08 07:33:25.378070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.509 [2024-11-08 07:33:25.430322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.909 test_start 00:06:08.909 oneshot 00:06:08.909 tick 100 00:06:08.909 tick 100 00:06:08.909 tick 250 00:06:08.909 tick 100 00:06:08.909 tick 100 00:06:08.909 tick 100 00:06:08.909 tick 250 00:06:08.909 tick 500 00:06:08.909 tick 100 00:06:08.909 tick 100 00:06:08.909 tick 250 00:06:08.909 tick 100 00:06:08.909 tick 100 00:06:08.909 test_end 00:06:08.909 00:06:08.909 real 0m1.257s 00:06:08.909 user 0m1.112s 00:06:08.909 sys 0m0.039s 00:06:08.909 07:33:26 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.909 07:33:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:08.909 ************************************ 00:06:08.909 END TEST event_reactor 00:06:08.909 ************************************ 00:06:08.909 07:33:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.909 07:33:26 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:08.909 07:33:26 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.909 07:33:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.909 ************************************ 00:06:08.909 START TEST event_reactor_perf 00:06:08.909 ************************************ 00:06:08.909 07:33:26 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.909 [2024-11-08 07:33:26.553038] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:06:08.909 [2024-11-08 07:33:26.553131] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58019 ] 00:06:08.909 [2024-11-08 07:33:26.704029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.909 [2024-11-08 07:33:26.754948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.843 test_start 00:06:09.843 test_end 00:06:09.843 Performance: 454563 events per second 00:06:09.843 00:06:09.843 real 0m1.266s 00:06:09.843 user 0m1.104s 00:06:09.843 sys 0m0.056s 00:06:09.843 ************************************ 00:06:09.843 END TEST event_reactor_perf 00:06:09.843 ************************************ 00:06:09.843 07:33:27 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.843 07:33:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.102 07:33:27 event -- event/event.sh@49 -- # uname -s 00:06:10.102 07:33:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:10.102 07:33:27 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:10.102 07:33:27 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.102 07:33:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.102 07:33:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.102 ************************************ 00:06:10.102 START TEST event_scheduler 00:06:10.102 ************************************ 00:06:10.102 07:33:27 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:10.102 * Looking for test storage... 
00:06:10.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:10.102 07:33:27 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.102 07:33:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.102 07:33:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.102 07:33:28 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.102 07:33:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.360 07:33:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.360 --rc genhtml_branch_coverage=1 00:06:10.360 --rc genhtml_function_coverage=1 00:06:10.360 --rc genhtml_legend=1 00:06:10.360 --rc geninfo_all_blocks=1 00:06:10.360 --rc geninfo_unexecuted_blocks=1 00:06:10.360 00:06:10.360 ' 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.360 --rc genhtml_branch_coverage=1 00:06:10.360 --rc genhtml_function_coverage=1 00:06:10.360 --rc genhtml_legend=1 00:06:10.360 --rc geninfo_all_blocks=1 00:06:10.360 --rc geninfo_unexecuted_blocks=1 00:06:10.360 00:06:10.360 ' 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.360 --rc genhtml_branch_coverage=1 00:06:10.360 --rc genhtml_function_coverage=1 00:06:10.360 --rc genhtml_legend=1 00:06:10.360 --rc geninfo_all_blocks=1 00:06:10.360 --rc geninfo_unexecuted_blocks=1 00:06:10.360 00:06:10.360 ' 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.360 --rc genhtml_branch_coverage=1 00:06:10.360 --rc genhtml_function_coverage=1 00:06:10.360 --rc genhtml_legend=1 00:06:10.360 --rc geninfo_all_blocks=1 00:06:10.360 --rc geninfo_unexecuted_blocks=1 00:06:10.360 00:06:10.360 ' 00:06:10.360 07:33:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:10.360 07:33:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58089 00:06:10.360 07:33:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.360 07:33:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:10.360 07:33:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58089 00:06:10.360 07:33:28 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58089 ']' 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:10.360 07:33:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.360 [2024-11-08 07:33:28.125443] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:10.360 [2024-11-08 07:33:28.126308] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58089 ] 00:06:10.360 [2024-11-08 07:33:28.292287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.618 [2024-11-08 07:33:28.360036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.618 [2024-11-08 07:33:28.360200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.618 [2024-11-08 07:33:28.360385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.619 [2024-11-08 07:33:28.360382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.553 07:33:29 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.553 07:33:29 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:11.553 07:33:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:11.553 07:33:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.553 07:33:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.553 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.553 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.553 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.553 POWER: Cannot set governor of lcore 0 to performance 00:06:11.553 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.553 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.553 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:11.553 POWER: Cannot set governor of lcore 0 to userspace 00:06:11.553 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:11.554 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:11.554 POWER: Unable to set Power Management Environment for lcore 0 00:06:11.554 [2024-11-08 07:33:29.206132] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:11.554 [2024-11-08 07:33:29.206147] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:11.554 [2024-11-08 07:33:29.206167] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:11.554 [2024-11-08 07:33:29.206183] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:11.554 [2024-11-08 07:33:29.206191] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:11.554 [2024-11-08 07:33:29.206199] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:11.554 07:33:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:11.554 07:33:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 [2024-11-08 07:33:29.256545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.554 [2024-11-08 07:33:29.286214] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:11.554 07:33:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:11.554 07:33:29 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:11.554 07:33:29 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 ************************************ 00:06:11.554 START TEST scheduler_create_thread 00:06:11.554 ************************************ 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 2 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 3 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 4 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 5 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 6 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 7 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 8 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 9 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 10 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.554 07:33:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.024 07:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.024 07:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:13.024 07:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:13.024 07:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.024 07:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.958 ************************************ 00:06:13.958 END TEST scheduler_create_thread 00:06:13.958 ************************************ 00:06:13.958 07:33:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.958 00:06:13.958 real 0m2.615s 00:06:13.958 user 0m0.022s 00:06:13.958 sys 0m0.007s 00:06:13.958 07:33:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.958 07:33:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.215 07:33:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:14.215 07:33:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58089 00:06:14.215 07:33:31 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58089 ']' 00:06:14.215 07:33:31 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58089 00:06:14.215 07:33:31 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:14.215 07:33:31 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:14.215 07:33:31 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58089 00:06:14.215 killing process with pid 58089 00:06:14.215 07:33:32 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:14.215 07:33:32 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:14.215 07:33:32 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
58089' 00:06:14.215 07:33:32 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58089 00:06:14.215 07:33:32 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58089 00:06:14.473 [2024-11-08 07:33:32.393572] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:14.732 00:06:14.732 real 0m4.711s 00:06:14.732 user 0m9.136s 00:06:14.732 sys 0m0.430s 00:06:14.732 07:33:32 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.732 ************************************ 00:06:14.732 END TEST event_scheduler 00:06:14.732 ************************************ 00:06:14.732 07:33:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.732 07:33:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:14.732 07:33:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:14.732 07:33:32 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.732 07:33:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.732 07:33:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.732 ************************************ 00:06:14.732 START TEST app_repeat 00:06:14.732 ************************************ 00:06:14.732 07:33:32 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58183 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.732 Process app_repeat pid: 58183 00:06:14.732 spdk_app_start Round 0 00:06:14.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58183' 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:14.732 07:33:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58183 /var/tmp/spdk-nbd.sock 00:06:14.732 07:33:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58183 ']' 00:06:14.732 07:33:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.732 07:33:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.732 07:33:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:14.732 07:33:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.732 07:33:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.732 [2024-11-08 07:33:32.679151] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:14.732 [2024-11-08 07:33:32.679239] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58183 ] 00:06:14.991 [2024-11-08 07:33:32.828594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.991 [2024-11-08 07:33:32.882787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.991 [2024-11-08 07:33:32.882789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.991 [2024-11-08 07:33:32.925881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.249 07:33:32 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.249 07:33:32 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:15.249 07:33:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.249 Malloc0 00:06:15.249 07:33:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.816 Malloc1 00:06:15.816 07:33:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.816 /dev/nbd0 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.816 07:33:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.816 1+0 records in 00:06:15.816 1+0 records out 00:06:15.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203257 s, 20.2 MB/s 00:06:15.816 07:33:33 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.074 07:33:33 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:16.074 07:33:33 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.074 07:33:33 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.074 07:33:33 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:16.074 07:33:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.074 07:33:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.074 07:33:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.333 /dev/nbd1 00:06:16.333 07:33:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.333 07:33:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.333 1+0 records in 00:06:16.333 1+0 records out 00:06:16.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325326 s, 12.6 MB/s 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.333 07:33:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.333 07:33:34 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:06:16.333 07:33:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.333 07:33:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.333 07:33:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.333 07:33:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.333 07:33:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.591 { 00:06:16.591 "nbd_device": "/dev/nbd0", 00:06:16.591 "bdev_name": "Malloc0" 00:06:16.591 }, 00:06:16.591 { 00:06:16.591 "nbd_device": "/dev/nbd1", 00:06:16.591 "bdev_name": "Malloc1" 00:06:16.591 } 00:06:16.591 ]' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.591 { 00:06:16.591 "nbd_device": "/dev/nbd0", 00:06:16.591 "bdev_name": "Malloc0" 00:06:16.591 }, 00:06:16.591 { 00:06:16.591 "nbd_device": "/dev/nbd1", 00:06:16.591 "bdev_name": "Malloc1" 00:06:16.591 } 00:06:16.591 ]' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.591 /dev/nbd1' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.591 /dev/nbd1' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.591 256+0 records in 00:06:16.591 256+0 records out 00:06:16.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105872 s, 99.0 MB/s 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.591 256+0 records in 00:06:16.591 256+0 records out 00:06:16.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280408 s, 37.4 MB/s 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.591 256+0 records in 00:06:16.591 
256+0 records out 00:06:16.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252912 s, 41.5 MB/s 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.591 07:33:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.850 07:33:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.850 07:33:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.850 07:33:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.850 07:33:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.850 07:33:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.850 07:33:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.850 07:33:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.850 07:33:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.109 07:33:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.109 07:33:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.676 07:33:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.676 07:33:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.935 07:33:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.935 [2024-11-08 07:33:35.814956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.935 [2024-11-08 07:33:35.868290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.935 [2024-11-08 07:33:35.868298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.194 [2024-11-08 07:33:35.910837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.194 [2024-11-08 07:33:35.910906] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.194 [2024-11-08 07:33:35.910917] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.772 spdk_app_start Round 1 00:06:20.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.772 07:33:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.772 07:33:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:20.772 07:33:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58183 /var/tmp/spdk-nbd.sock 00:06:20.773 07:33:38 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58183 ']' 00:06:20.773 07:33:38 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.773 07:33:38 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.773 07:33:38 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:20.773 07:33:38 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.773 07:33:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.031 07:33:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.031 07:33:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:21.031 07:33:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.291 Malloc0 00:06:21.291 07:33:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.549 Malloc1 00:06:21.549 07:33:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.549 07:33:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.550 07:33:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.550 07:33:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.550 07:33:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.550 07:33:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.550 07:33:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.809 /dev/nbd0 00:06:21.809 07:33:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.809 07:33:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.809 1+0 records in 00:06:21.809 1+0 records out 
00:06:21.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283502 s, 14.4 MB/s 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:21.809 07:33:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:21.809 07:33:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.809 07:33:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.809 07:33:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.072 /dev/nbd1 00:06:22.072 07:33:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.072 07:33:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.072 1+0 records in 00:06:22.072 1+0 records out 00:06:22.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311974 s, 13.1 MB/s 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:22.072 07:33:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:22.072 07:33:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.072 07:33:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.072 07:33:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.072 07:33:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.072 07:33:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.331 07:33:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.331 { 00:06:22.331 "nbd_device": "/dev/nbd0", 00:06:22.331 "bdev_name": "Malloc0" 00:06:22.331 }, 00:06:22.331 { 00:06:22.331 "nbd_device": "/dev/nbd1", 00:06:22.331 "bdev_name": "Malloc1" 00:06:22.331 } 
00:06:22.331 ]' 00:06:22.331 07:33:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.332 { 00:06:22.332 "nbd_device": "/dev/nbd0", 00:06:22.332 "bdev_name": "Malloc0" 00:06:22.332 }, 00:06:22.332 { 00:06:22.332 "nbd_device": "/dev/nbd1", 00:06:22.332 "bdev_name": "Malloc1" 00:06:22.332 } 00:06:22.332 ]' 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.332 /dev/nbd1' 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.332 /dev/nbd1' 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.332 256+0 records in 00:06:22.332 256+0 records out 00:06:22.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0079531 s, 132 MB/s 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.332 256+0 records in 00:06:22.332 256+0 records out 00:06:22.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214973 s, 48.8 MB/s 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.332 256+0 records in 00:06:22.332 256+0 records out 00:06:22.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236049 s, 44.4 MB/s 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:22.332 07:33:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.591 07:33:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.850 07:33:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.109 07:33:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.368 07:33:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.368 07:33:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.368 07:33:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:23.368 07:33:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.369 07:33:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.369 07:33:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.369 07:33:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.369 07:33:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.369 07:33:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.369 07:33:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.369 07:33:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.369 07:33:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.369 07:33:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.643 07:33:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.902 [2024-11-08 07:33:41.615638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.902 [2024-11-08 07:33:41.661701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.902 [2024-11-08 07:33:41.661705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.902 [2024-11-08 07:33:41.704345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.902 [2024-11-08 07:33:41.704411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.902 [2024-11-08 07:33:41.704422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.201 spdk_app_start Round 2 00:06:27.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.201 07:33:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.201 07:33:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:27.201 07:33:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58183 /var/tmp/spdk-nbd.sock 00:06:27.201 07:33:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58183 ']' 00:06:27.201 07:33:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.201 07:33:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.201 07:33:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:27.201 07:33:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.201 07:33:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.201 07:33:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.201 07:33:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:27.201 07:33:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.201 Malloc0 00:06:27.201 07:33:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.460 Malloc1 00:06:27.460 07:33:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.460 07:33:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.718 /dev/nbd0 00:06:27.718 07:33:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.718 07:33:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.718 1+0 records in 00:06:27.718 1+0 records out 
00:06:27.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325264 s, 12.6 MB/s 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:27.718 07:33:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.719 07:33:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:27.719 07:33:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:27.719 07:33:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.719 07:33:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.719 07:33:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.977 /dev/nbd1 00:06:27.977 07:33:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.977 07:33:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.977 1+0 records in 00:06:27.977 1+0 records out 00:06:27.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253092 s, 16.2 MB/s 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:27.977 07:33:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:27.977 07:33:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.977 07:33:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.977 07:33:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.977 07:33:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.977 07:33:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.236 { 00:06:28.236 "nbd_device": "/dev/nbd0", 00:06:28.236 "bdev_name": "Malloc0" 00:06:28.236 }, 00:06:28.236 { 00:06:28.236 "nbd_device": "/dev/nbd1", 00:06:28.236 "bdev_name": "Malloc1" 00:06:28.236 } 
00:06:28.236 ]' 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.236 { 00:06:28.236 "nbd_device": "/dev/nbd0", 00:06:28.236 "bdev_name": "Malloc0" 00:06:28.236 }, 00:06:28.236 { 00:06:28.236 "nbd_device": "/dev/nbd1", 00:06:28.236 "bdev_name": "Malloc1" 00:06:28.236 } 00:06:28.236 ]' 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.236 /dev/nbd1' 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.236 /dev/nbd1' 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.236 256+0 records in 00:06:28.236 256+0 records out 00:06:28.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464312 s, 226 MB/s 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.236 256+0 records in 00:06:28.236 256+0 records out 00:06:28.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217135 s, 48.3 MB/s 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.236 07:33:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.496 256+0 records in 00:06:28.496 256+0 records out 00:06:28.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241001 s, 43.5 MB/s 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.496 07:33:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.496 07:33:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.754 07:33:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.754 07:33:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.754 07:33:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.754 07:33:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.013 07:33:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.273 07:33:47 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.273 07:33:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.273 07:33:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.532 07:33:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.532 [2024-11-08 07:33:47.460783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.792 [2024-11-08 07:33:47.510460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.792 [2024-11-08 07:33:47.510465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.792 [2024-11-08 07:33:47.551986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.792 [2024-11-08 07:33:47.552065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.792 [2024-11-08 07:33:47.552076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.075 07:33:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58183 /var/tmp/spdk-nbd.sock 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58183 ']' 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
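The trace above is nbd_common.sh's data-verify helper at work: seed a 1 MiB random file, copy it onto each exported /dev/nbdX with O_DIRECT, byte-compare it back, stop the exports over RPC, and finally confirm nbd_get_disks reports zero devices. A condensed sketch of that sequence follows; the scratch path and the 0.1 s poll interval are illustrative assumptions, the RPC socket and block sizes are taken from the trace.

# Condensed sketch of the write/verify/teardown sequence traced above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest                                        # illustrative scratch path

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write phase
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                              # verify phase
done
rm "$tmp_file"

for dev in "${nbd_list[@]}"; do
    $rpc nbd_stop_disk "$dev"
    # wait until the kernel drops the device from /proc/partitions
    until ! grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
done
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]                                               # no exports left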
00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:33.075 07:33:50 event.app_repeat -- event/event.sh@39 -- # killprocess 58183 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58183 ']' 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58183 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58183 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:33.075 killing process with pid 58183 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58183' 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58183 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58183 00:06:33.075 spdk_app_start is called in Round 0. 00:06:33.075 Shutdown signal received, stop current app iteration 00:06:33.075 Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 reinitialization... 00:06:33.075 spdk_app_start is called in Round 1. 00:06:33.075 Shutdown signal received, stop current app iteration 00:06:33.075 Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 reinitialization... 00:06:33.075 spdk_app_start is called in Round 2. 00:06:33.075 Shutdown signal received, stop current app iteration 00:06:33.075 Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 reinitialization... 00:06:33.075 spdk_app_start is called in Round 3. 00:06:33.075 Shutdown signal received, stop current app iteration 00:06:33.075 07:33:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.075 07:33:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:33.075 00:06:33.075 real 0m18.125s 00:06:33.075 user 0m40.448s 00:06:33.075 sys 0m3.257s 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.075 ************************************ 00:06:33.075 07:33:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.075 END TEST app_repeat 00:06:33.075 ************************************ 00:06:33.075 07:33:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.075 07:33:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:33.075 07:33:50 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.075 07:33:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.075 07:33:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.075 ************************************ 00:06:33.075 START TEST cpu_locks 00:06:33.075 ************************************ 00:06:33.075 07:33:50 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:33.075 * Looking for test storage... 
00:06:33.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:33.075 07:33:50 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:33.075 07:33:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:33.075 07:33:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:33.075 07:33:51 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:33.075 07:33:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.075 07:33:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.075 07:33:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.075 07:33:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.075 07:33:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.075 07:33:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.076 07:33:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:33.076 07:33:51 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.076 07:33:51 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:33.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.076 --rc genhtml_branch_coverage=1 00:06:33.076 --rc genhtml_function_coverage=1 00:06:33.076 --rc genhtml_legend=1 00:06:33.076 --rc geninfo_all_blocks=1 00:06:33.076 --rc geninfo_unexecuted_blocks=1 00:06:33.076 00:06:33.076 ' 00:06:33.076 07:33:51 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:33.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.076 --rc genhtml_branch_coverage=1 00:06:33.076 --rc genhtml_function_coverage=1 
00:06:33.076 --rc genhtml_legend=1 00:06:33.076 --rc geninfo_all_blocks=1 00:06:33.076 --rc geninfo_unexecuted_blocks=1 00:06:33.076 00:06:33.076 ' 00:06:33.076 07:33:51 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:33.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.076 --rc genhtml_branch_coverage=1 00:06:33.076 --rc genhtml_function_coverage=1 00:06:33.076 --rc genhtml_legend=1 00:06:33.076 --rc geninfo_all_blocks=1 00:06:33.076 --rc geninfo_unexecuted_blocks=1 00:06:33.076 00:06:33.076 ' 00:06:33.076 07:33:51 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:33.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.076 --rc genhtml_branch_coverage=1 00:06:33.076 --rc genhtml_function_coverage=1 00:06:33.076 --rc genhtml_legend=1 00:06:33.076 --rc geninfo_all_blocks=1 00:06:33.076 --rc geninfo_unexecuted_blocks=1 00:06:33.076 00:06:33.076 ' 00:06:33.076 07:33:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:33.076 07:33:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:33.076 07:33:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:33.076 07:33:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:33.076 07:33:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.076 07:33:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.076 07:33:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.334 ************************************ 00:06:33.334 START TEST default_locks 00:06:33.334 ************************************ 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58617 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58617 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58617 ']' 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.334 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.334 [2024-11-08 07:33:51.083936] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
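Before the cpu_locks suite starts, scripts/common.sh decides which lcov options to export by comparing the installed lcov version against 1.15 component by component (the IFS='.-:' / read -ra / decimal steps traced above). A minimal sketch of that comparison, assuming purely numeric components; it is a condensation of the traced logic, not the exact script:

# Minimal sketch of the component-wise version check traced above.
version_lt() {            # returns 0 when $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1              # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2: keep the lcov_*_coverage rc options"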
00:06:33.334 [2024-11-08 07:33:51.084025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58617 ] 00:06:33.334 [2024-11-08 07:33:51.225966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.334 [2024-11-08 07:33:51.274611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.601 [2024-11-08 07:33:51.329772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.602 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.602 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:33.602 07:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58617 00:06:33.602 07:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58617 00:06:33.602 07:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.220 07:33:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58617 00:06:34.220 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58617 ']' 00:06:34.220 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58617 00:06:34.220 07:33:51 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:34.220 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.220 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58617 00:06:34.220 killing process with pid 58617 00:06:34.220 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:34.220 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:34.220 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58617' 00:06:34.220 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58617 00:06:34.220 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58617 00:06:34.478 07:33:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58617 00:06:34.478 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:34.478 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58617 00:06:34.478 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:34.478 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.478 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58617 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58617 ']' 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.479 
07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58617) - No such process 00:06:34.479 ERROR: process (pid: 58617) is no longer running 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.479 00:06:34.479 real 0m1.318s 00:06:34.479 user 0m1.335s 00:06:34.479 sys 0m0.547s 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.479 07:33:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.479 ************************************ 00:06:34.479 END TEST default_locks 00:06:34.479 ************************************ 00:06:34.479 07:33:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:34.479 07:33:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:34.479 07:33:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.479 07:33:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.479 ************************************ 00:06:34.479 START TEST default_locks_via_rpc 00:06:34.479 ************************************ 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58656 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58656 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58656 ']' 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:06:34.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.479 07:33:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.737 [2024-11-08 07:33:52.482414] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:34.737 [2024-11-08 07:33:52.482520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58656 ] 00:06:34.737 [2024-11-08 07:33:52.625170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.737 [2024-11-08 07:33:52.688879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.996 [2024-11-08 07:33:52.744106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58656 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58656 00:06:35.562 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.128 07:33:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58656 00:06:36.128 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58656 ']' 00:06:36.128 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58656 00:06:36.128 07:33:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:36.128 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.128 07:33:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58656 00:06:36.128 killing process with pid 58656 00:06:36.128 07:33:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:36.128 07:33:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:36.128 07:33:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58656' 00:06:36.128 07:33:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58656 00:06:36.128 07:33:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58656 00:06:36.386 00:06:36.386 real 0m1.917s 00:06:36.386 user 0m2.145s 00:06:36.386 sys 0m0.597s 00:06:36.386 ************************************ 00:06:36.386 END TEST default_locks_via_rpc 00:06:36.386 ************************************ 00:06:36.386 07:33:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.386 07:33:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.644 07:33:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:36.644 07:33:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.644 07:33:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.644 07:33:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.644 ************************************ 00:06:36.644 START TEST non_locking_app_on_locked_coremask 00:06:36.644 ************************************ 00:06:36.644 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:36.644 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58707 00:06:36.644 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58707 /var/tmp/spdk.sock 00:06:36.644 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58707 ']' 00:06:36.645 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.645 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.645 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:36.645 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
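default_locks_via_rpc (pid 58656 above) exercises the runtime toggles rather than process exit: framework_disable_cpumask_locks should release the core-mask lock while the target keeps running, and framework_enable_cpumask_locks should take it back, confirmed with the lslocks probe. A sketch of that round trip; $pid stands for the running spdk_tgt, and the real no_locks helper globs the /var/tmp/spdk_cpu_lock_* files rather than reusing lslocks as this sketch does for brevity.

# Sketch of the RPC round trip traced for default_locks_via_rpc.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

$rpc framework_disable_cpumask_locks           # drop the per-core lock at runtime
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "lock unexpectedly still held" >&2; exit 1
fi

$rpc framework_enable_cpumask_locks            # re-acquire it
lslocks -p "$pid" | grep -q spdk_cpu_lock      # the lock must be visible again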
00:06:36.645 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:36.645 07:33:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.645 [2024-11-08 07:33:54.464006] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:36.645 [2024-11-08 07:33:54.464108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58707 ] 00:06:36.903 [2024-11-08 07:33:54.615055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.903 [2024-11-08 07:33:54.668865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.903 [2024-11-08 07:33:54.724718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58723 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58723 /var/tmp/spdk2.sock 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58723 ']' 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.838 07:33:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.838 [2024-11-08 07:33:55.496348] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:37.838 [2024-11-08 07:33:55.496452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58723 ] 00:06:37.838 [2024-11-08 07:33:55.655131] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
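non_locking_app_on_locked_coremask (pids 58707 and 58723 above) shows the intended escape hatch: the first target claims core 0's lock, and a second target on the same mask still starts because it is launched with --disable-cpumask-locks and its own RPC socket, which is why the trace prints "CPU core locks deactivated." for the second instance only. A sketch of the setup, with binary and socket paths as used in the trace:

# Sketch of the two-instance setup traced for non_locking_app_on_locked_coremask.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &                                            # holds the core 0 lock
pid1=$!
# (the suite's waitforlisten helper polls /var/tmp/spdk.sock here)

$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                                       # starts despite the held lock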
00:06:37.838 [2024-11-08 07:33:55.655183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.838 [2024-11-08 07:33:55.765654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.095 [2024-11-08 07:33:55.877536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.662 07:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.662 07:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:38.662 07:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58707 00:06:38.662 07:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58707 00:06:38.662 07:33:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58707 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58707 ']' 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58707 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58707 00:06:39.597 killing process with pid 58707 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58707' 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58707 00:06:39.597 07:33:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58707 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58723 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58723 ']' 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58723 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58723 00:06:40.165 killing process with pid 58723 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:40.165 07:33:58 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58723' 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58723 00:06:40.165 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58723 00:06:40.424 ************************************ 00:06:40.424 END TEST non_locking_app_on_locked_coremask 00:06:40.424 ************************************ 00:06:40.424 00:06:40.424 real 0m3.963s 00:06:40.424 user 0m4.439s 00:06:40.424 sys 0m1.185s 00:06:40.424 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.424 07:33:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.683 07:33:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:40.683 07:33:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.683 07:33:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.683 07:33:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.683 ************************************ 00:06:40.683 START TEST locking_app_on_unlocked_coremask 00:06:40.683 ************************************ 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58790 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58790 /var/tmp/spdk.sock 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58790 ']' 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.683 07:33:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.683 [2024-11-08 07:33:58.490001] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:40.683 [2024-11-08 07:33:58.490098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58790 ] 00:06:40.683 [2024-11-08 07:33:58.639512] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
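The locks_exist probe that closes each of these tests (traced above for pids 58617, 58656 and 58707) is simply lslocks filtered for the spdk_cpu_lock files the target holds on its claimed cores. A one-line sketch of it:

# Sketch of the locks_exist check traced above.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # true when the PID holds spdk_cpu_lock_* locks
}
locks_exist 58707                              # PID of the lock-holding target in this run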
00:06:40.683 [2024-11-08 07:33:58.639556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.942 [2024-11-08 07:33:58.692269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.942 [2024-11-08 07:33:58.747738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58806 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58806 /var/tmp/spdk2.sock 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58806 ']' 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:41.510 07:33:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.768 [2024-11-08 07:33:59.505203] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
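locking_app_on_unlocked_coremask (pids 58790 and 58806) inverts the previous case: the first target starts with --disable-cpumask-locks, so core 0 is left unclaimed, and a second, lock-enabled target on the same mask can start and is the one expected to hold the lock afterwards. A sketch of that ordering, sockets as in the trace:

# Sketch of the ordering traced for locking_app_on_unlocked_coremask.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 --disable-cpumask-locks &     # leaves core 0 unclaimed
pid1=$!
$spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # lock-enabled instance takes core 0's lock
pid2=$!
# afterwards the suite expects locks_exist "$pid2" to succeed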
00:06:41.768 [2024-11-08 07:33:59.505661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58806 ] 00:06:41.768 [2024-11-08 07:33:59.654352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.028 [2024-11-08 07:33:59.759278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.028 [2024-11-08 07:33:59.875577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.595 07:34:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.595 07:34:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:42.595 07:34:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58806 00:06:42.595 07:34:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58806 00:06:42.595 07:34:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.162 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58790 00:06:43.162 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58790 ']' 00:06:43.162 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58790 00:06:43.162 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:43.162 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:43.162 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58790 00:06:43.420 killing process with pid 58790 00:06:43.420 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:43.420 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:43.420 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58790' 00:06:43.420 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58790 00:06:43.420 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58790 00:06:43.988 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58806 00:06:43.988 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58806 ']' 00:06:43.988 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58806 00:06:43.988 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:43.988 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:43.988 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58806 00:06:43.988 killing process with pid 58806 00:06:43.988 07:34:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:43.989 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:43.989 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58806' 00:06:43.989 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58806 00:06:43.989 07:34:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58806 00:06:44.248 00:06:44.248 real 0m3.681s 00:06:44.248 user 0m4.079s 00:06:44.248 sys 0m1.034s 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.248 ************************************ 00:06:44.248 END TEST locking_app_on_unlocked_coremask 00:06:44.248 ************************************ 00:06:44.248 07:34:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.248 07:34:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.248 07:34:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.248 07:34:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.248 ************************************ 00:06:44.248 START TEST locking_app_on_locked_coremask 00:06:44.248 ************************************ 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58873 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58873 /var/tmp/spdk.sock 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58873 ']' 00:06:44.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:44.248 07:34:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.507 [2024-11-08 07:34:02.223473] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:06:44.507 [2024-11-08 07:34:02.223580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58873 ] 00:06:44.507 [2024-11-08 07:34:02.380258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.507 [2024-11-08 07:34:02.443259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.766 [2024-11-08 07:34:02.509239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58889 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58889 /var/tmp/spdk2.sock 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58889 /var/tmp/spdk2.sock 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58889 /var/tmp/spdk2.sock 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58889 ']' 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.335 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.335 [2024-11-08 07:34:03.255614] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
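When a start is expected to fail, the suite wraps the wait in the NOT helper traced above (the local es=0 / valid_exec_arg / es=1 steps): it runs the command, records the failure, and inverts it so the test passes only if the wrapped command failed. A reduced sketch of that inversion; the real helper also validates its argument and distinguishes signal exits via es > 128:

# Reduced sketch of the NOT helper traced above: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded
    fi
    return 0            # command failed, which is what the caller wanted
}
# usage shape from the trace:
# NOT waitforlisten 58889 /var/tmp/spdk2.sock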
00:06:45.335 [2024-11-08 07:34:03.256130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58889 ] 00:06:45.595 [2024-11-08 07:34:03.412298] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58873 has claimed it. 00:06:45.595 [2024-11-08 07:34:03.412355] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.207 ERROR: process (pid: 58889) is no longer running 00:06:46.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58889) - No such process 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58873 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.207 07:34:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58873 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58873 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58873 ']' 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58873 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58873 00:06:46.465 killing process with pid 58873 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58873' 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58873 00:06:46.465 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58873 00:06:46.724 00:06:46.724 real 0m2.491s 00:06:46.724 user 0m2.850s 00:06:46.724 sys 0m0.627s 00:06:46.724 07:34:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.724 07:34:04 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:46.724 ************************************ 00:06:46.724 END TEST locking_app_on_locked_coremask 00:06:46.724 ************************************ 00:06:46.982 07:34:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:46.982 07:34:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:46.982 07:34:04 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.982 07:34:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.982 ************************************ 00:06:46.982 START TEST locking_overlapped_coremask 00:06:46.982 ************************************ 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58929 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58929 /var/tmp/spdk.sock 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58929 ']' 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:46.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:46.982 07:34:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.982 [2024-11-08 07:34:04.779440] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
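The failure path locking_app_on_locked_coremask asserts is visible in the trace above: with pid 58873 holding core 0, the second plain -m 0x1 target logs "Cannot create lock on core 0, probably process 58873 has claimed it" and exits, so NOT waitforlisten on its socket is what turns the refusal into a passing test. A sketch of the scenario; NOT, waitforlisten and locks_exist are the suite helpers from autotest_common.sh / cpu_locks.sh referenced in the trace:

# Sketch of the conflict traced for locking_app_on_locked_coremask.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &                             # first target claims core 0
pid1=$!
$spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # second target hits claim_cpu_cores and exits
pid2=$!
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock  # passes because the socket never comes up
locks_exist "$pid1"                            # the original owner still holds the lock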
00:06:46.982 [2024-11-08 07:34:04.779554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58929 ] 00:06:46.982 [2024-11-08 07:34:04.934818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.241 [2024-11-08 07:34:04.990204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.241 [2024-11-08 07:34:04.990342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.241 [2024-11-08 07:34:04.990343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.241 [2024-11-08 07:34:05.046350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58947 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58947 /var/tmp/spdk2.sock 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58947 /var/tmp/spdk2.sock 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58947 /var/tmp/spdk2.sock 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58947 ']' 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.177 07:34:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.177 [2024-11-08 07:34:05.845806] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:06:48.177 [2024-11-08 07:34:05.846483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58947 ] 00:06:48.177 [2024-11-08 07:34:05.996449] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58929 has claimed it. 00:06:48.177 [2024-11-08 07:34:05.996519] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.755 ERROR: process (pid: 58947) is no longer running 00:06:48.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58947) - No such process 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58929 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58929 ']' 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58929 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58929 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58929' 00:06:48.755 killing process with pid 58929 00:06:48.755 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58929 00:06:48.755 07:34:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58929 00:06:49.013 00:06:49.013 real 0m2.205s 00:06:49.013 user 0m6.326s 00:06:49.013 sys 0m0.400s 00:06:49.013 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.013 07:34:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.013 ************************************ 00:06:49.013 END TEST locking_overlapped_coremask 00:06:49.013 ************************************ 00:06:49.014 07:34:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.014 07:34:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:49.014 07:34:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.014 07:34:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.014 ************************************ 00:06:49.014 START TEST locking_overlapped_coremask_via_rpc 00:06:49.014 ************************************ 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58993 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58993 /var/tmp/spdk.sock 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58993 ']' 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:49.014 07:34:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.272 [2024-11-08 07:34:07.035756] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:49.272 [2024-11-08 07:34:07.035869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58993 ] 00:06:49.272 [2024-11-08 07:34:07.189826] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.272 [2024-11-08 07:34:07.189867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.531 [2024-11-08 07:34:07.241954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.531 [2024-11-08 07:34:07.242130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.531 [2024-11-08 07:34:07.242129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.531 [2024-11-08 07:34:07.298437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59003 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59003 /var/tmp/spdk2.sock 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59003 ']' 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:49.531 07:34:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.790 [2024-11-08 07:34:07.527671] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:49.790 [2024-11-08 07:34:07.527770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59003 ] 00:06:49.790 [2024-11-08 07:34:07.688867] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.790 [2024-11-08 07:34:07.688915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.048 [2024-11-08 07:34:07.798187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.048 [2024-11-08 07:34:07.802203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.048 [2024-11-08 07:34:07.802207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.048 [2024-11-08 07:34:07.917381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.615 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.615 [2024-11-08 07:34:08.504093] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58993 has claimed it. 00:06:50.615 request: 00:06:50.615 { 00:06:50.615 "method": "framework_enable_cpumask_locks", 00:06:50.615 "req_id": 1 00:06:50.615 } 00:06:50.615 Got JSON-RPC error response 00:06:50.615 response: 00:06:50.615 { 00:06:50.615 "code": -32603, 00:06:50.615 "message": "Failed to claim CPU core: 2" 00:06:50.616 } 00:06:50.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58993 /var/tmp/spdk.sock 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58993 ']' 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.616 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59003 /var/tmp/spdk2.sock 00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59003 ']' 00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.874 07:34:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.133 07:34:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.133 07:34:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:51.133 07:34:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.133 07:34:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.133 07:34:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.133 07:34:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.133 00:06:51.133 real 0m2.068s 00:06:51.133 user 0m1.157s 00:06:51.133 sys 0m0.162s 00:06:51.133 ************************************ 00:06:51.133 END TEST locking_overlapped_coremask_via_rpc 00:06:51.133 ************************************ 00:06:51.133 07:34:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.133 07:34:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.133 07:34:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.133 07:34:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58993 ]] 00:06:51.133 07:34:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58993 00:06:51.133 07:34:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58993 ']' 00:06:51.133 07:34:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58993 00:06:51.133 07:34:09 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:51.133 07:34:09 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:51.133 07:34:09 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58993 00:06:51.391 killing process with pid 58993 00:06:51.391 07:34:09 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:51.391 07:34:09 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:51.391 07:34:09 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58993' 00:06:51.391 07:34:09 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58993 00:06:51.391 07:34:09 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58993 00:06:51.651 07:34:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59003 ]] 00:06:51.651 07:34:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59003 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59003 ']' 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59003 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:51.651 
07:34:09 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59003 00:06:51.651 killing process with pid 59003 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59003' 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59003 00:06:51.651 07:34:09 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59003 00:06:51.920 07:34:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.920 Process with pid 58993 is not found 00:06:51.920 07:34:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:51.920 07:34:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58993 ]] 00:06:51.920 07:34:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58993 00:06:51.920 07:34:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58993 ']' 00:06:51.920 07:34:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58993 00:06:51.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58993) - No such process 00:06:51.920 07:34:09 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58993 is not found' 00:06:51.920 07:34:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59003 ]] 00:06:51.920 07:34:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59003 00:06:51.920 07:34:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59003 ']' 00:06:51.920 Process with pid 59003 is not found 00:06:51.920 07:34:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59003 00:06:51.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59003) - No such process 00:06:51.920 07:34:09 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59003 is not found' 00:06:51.920 07:34:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.920 ************************************ 00:06:51.920 END TEST cpu_locks 00:06:51.920 ************************************ 00:06:51.920 00:06:51.920 real 0m18.974s 00:06:51.920 user 0m33.058s 00:06:51.920 sys 0m5.407s 00:06:51.920 07:34:09 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.920 07:34:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.920 ************************************ 00:06:51.920 END TEST event 00:06:51.920 ************************************ 00:06:51.920 00:06:51.920 real 0m46.173s 00:06:51.920 user 1m29.193s 00:06:51.920 sys 0m9.567s 00:06:51.920 07:34:09 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.920 07:34:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.179 07:34:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.179 07:34:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.179 07:34:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.179 07:34:09 -- common/autotest_common.sh@10 -- # set +x 00:06:52.179 ************************************ 00:06:52.179 START TEST thread 00:06:52.179 ************************************ 00:06:52.179 07:34:09 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.179 * Looking for test storage... 
00:06:52.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.179 07:34:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.179 07:34:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.179 07:34:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.179 07:34:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.179 07:34:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.179 07:34:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.179 07:34:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.179 07:34:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.179 07:34:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.179 07:34:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.179 07:34:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.179 07:34:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:52.179 07:34:10 thread -- scripts/common.sh@345 -- # : 1 00:06:52.179 07:34:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.179 07:34:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.179 07:34:10 thread -- scripts/common.sh@365 -- # decimal 1 00:06:52.179 07:34:10 thread -- scripts/common.sh@353 -- # local d=1 00:06:52.179 07:34:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.179 07:34:10 thread -- scripts/common.sh@355 -- # echo 1 00:06:52.179 07:34:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.179 07:34:10 thread -- scripts/common.sh@366 -- # decimal 2 00:06:52.179 07:34:10 thread -- scripts/common.sh@353 -- # local d=2 00:06:52.179 07:34:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.179 07:34:10 thread -- scripts/common.sh@355 -- # echo 2 00:06:52.179 07:34:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.179 07:34:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.179 07:34:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.179 07:34:10 thread -- scripts/common.sh@368 -- # return 0 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.179 --rc genhtml_branch_coverage=1 00:06:52.179 --rc genhtml_function_coverage=1 00:06:52.179 --rc genhtml_legend=1 00:06:52.179 --rc geninfo_all_blocks=1 00:06:52.179 --rc geninfo_unexecuted_blocks=1 00:06:52.179 00:06:52.179 ' 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.179 --rc genhtml_branch_coverage=1 00:06:52.179 --rc genhtml_function_coverage=1 00:06:52.179 --rc genhtml_legend=1 00:06:52.179 --rc geninfo_all_blocks=1 00:06:52.179 --rc geninfo_unexecuted_blocks=1 00:06:52.179 00:06:52.179 ' 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:52.179 --rc genhtml_branch_coverage=1 00:06:52.179 --rc genhtml_function_coverage=1 00:06:52.179 --rc genhtml_legend=1 00:06:52.179 --rc geninfo_all_blocks=1 00:06:52.179 --rc geninfo_unexecuted_blocks=1 00:06:52.179 00:06:52.179 ' 00:06:52.179 07:34:10 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.179 --rc genhtml_branch_coverage=1 00:06:52.179 --rc genhtml_function_coverage=1 00:06:52.179 --rc genhtml_legend=1 00:06:52.180 --rc geninfo_all_blocks=1 00:06:52.180 --rc geninfo_unexecuted_blocks=1 00:06:52.180 00:06:52.180 ' 00:06:52.180 07:34:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.180 07:34:10 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:52.180 07:34:10 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.180 07:34:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.180 ************************************ 00:06:52.180 START TEST thread_poller_perf 00:06:52.180 ************************************ 00:06:52.180 07:34:10 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.180 [2024-11-08 07:34:10.113182] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:52.180 [2024-11-08 07:34:10.113285] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59134 ] 00:06:52.438 [2024-11-08 07:34:10.263108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.438 [2024-11-08 07:34:10.316509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.438 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:53.817 [2024-11-08T07:34:11.778Z] ====================================== 00:06:53.817 [2024-11-08T07:34:11.778Z] busy:2109304602 (cyc) 00:06:53.817 [2024-11-08T07:34:11.778Z] total_run_count: 394000 00:06:53.817 [2024-11-08T07:34:11.778Z] tsc_hz: 2100000000 (cyc) 00:06:53.817 [2024-11-08T07:34:11.778Z] ====================================== 00:06:53.817 [2024-11-08T07:34:11.778Z] poller_cost: 5353 (cyc), 2549 (nsec) 00:06:53.817 00:06:53.817 real 0m1.272s 00:06:53.817 user 0m1.117s 00:06:53.817 sys 0m0.048s 00:06:53.817 07:34:11 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.817 07:34:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.817 ************************************ 00:06:53.817 END TEST thread_poller_perf 00:06:53.817 ************************************ 00:06:53.817 07:34:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.817 07:34:11 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:53.817 07:34:11 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.817 07:34:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.817 ************************************ 00:06:53.817 START TEST thread_poller_perf 00:06:53.817 ************************************ 00:06:53.817 07:34:11 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.817 [2024-11-08 07:34:11.447140] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:53.817 [2024-11-08 07:34:11.447836] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59169 ] 00:06:53.817 [2024-11-08 07:34:11.598967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.817 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:53.817 [2024-11-08 07:34:11.651615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.751 [2024-11-08T07:34:12.712Z] ====================================== 00:06:54.751 [2024-11-08T07:34:12.712Z] busy:2101600340 (cyc) 00:06:54.751 [2024-11-08T07:34:12.712Z] total_run_count: 5238000 00:06:54.751 [2024-11-08T07:34:12.712Z] tsc_hz: 2100000000 (cyc) 00:06:54.751 [2024-11-08T07:34:12.712Z] ====================================== 00:06:54.751 [2024-11-08T07:34:12.712Z] poller_cost: 401 (cyc), 190 (nsec) 00:06:54.751 00:06:54.751 real 0m1.272s 00:06:54.751 user 0m1.114s 00:06:54.751 sys 0m0.052s 00:06:54.751 07:34:12 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.751 ************************************ 00:06:54.751 END TEST thread_poller_perf 00:06:54.751 ************************************ 00:06:54.751 07:34:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.010 07:34:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.010 00:06:55.010 real 0m2.838s 00:06:55.010 user 0m2.361s 00:06:55.010 sys 0m0.270s 00:06:55.010 ************************************ 00:06:55.010 END TEST thread 00:06:55.010 ************************************ 00:06:55.010 07:34:12 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.010 07:34:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.010 07:34:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:55.010 07:34:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.010 07:34:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.010 07:34:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.010 07:34:12 -- common/autotest_common.sh@10 -- # set +x 00:06:55.010 ************************************ 00:06:55.010 START TEST app_cmdline 00:06:55.010 ************************************ 00:06:55.010 07:34:12 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.010 * Looking for test storage... 
00:06:55.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.010 07:34:12 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:55.010 07:34:12 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:55.010 07:34:12 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:55.268 07:34:12 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.268 07:34:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.269 07:34:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:55.269 07:34:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.269 07:34:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.269 07:34:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.269 07:34:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:55.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.269 --rc genhtml_branch_coverage=1 00:06:55.269 --rc genhtml_function_coverage=1 00:06:55.269 --rc genhtml_legend=1 00:06:55.269 --rc geninfo_all_blocks=1 00:06:55.269 --rc geninfo_unexecuted_blocks=1 00:06:55.269 00:06:55.269 ' 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:55.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.269 --rc genhtml_branch_coverage=1 00:06:55.269 --rc genhtml_function_coverage=1 00:06:55.269 --rc genhtml_legend=1 00:06:55.269 --rc geninfo_all_blocks=1 00:06:55.269 --rc geninfo_unexecuted_blocks=1 00:06:55.269 
00:06:55.269 ' 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:55.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.269 --rc genhtml_branch_coverage=1 00:06:55.269 --rc genhtml_function_coverage=1 00:06:55.269 --rc genhtml_legend=1 00:06:55.269 --rc geninfo_all_blocks=1 00:06:55.269 --rc geninfo_unexecuted_blocks=1 00:06:55.269 00:06:55.269 ' 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:55.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.269 --rc genhtml_branch_coverage=1 00:06:55.269 --rc genhtml_function_coverage=1 00:06:55.269 --rc genhtml_legend=1 00:06:55.269 --rc geninfo_all_blocks=1 00:06:55.269 --rc geninfo_unexecuted_blocks=1 00:06:55.269 00:06:55.269 ' 00:06:55.269 07:34:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.269 07:34:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59246 00:06:55.269 07:34:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.269 07:34:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59246 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59246 ']' 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.269 07:34:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.269 [2024-11-08 07:34:13.070741] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:06:55.269 [2024-11-08 07:34:13.071537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:06:55.269 [2024-11-08 07:34:13.220519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.527 [2024-11-08 07:34:13.273183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.527 [2024-11-08 07:34:13.330975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.093 07:34:14 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.093 07:34:14 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:56.093 07:34:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:56.351 { 00:06:56.351 "version": "SPDK v25.01-pre git sha1 e729adafb", 00:06:56.351 "fields": { 00:06:56.352 "major": 25, 00:06:56.352 "minor": 1, 00:06:56.352 "patch": 0, 00:06:56.352 "suffix": "-pre", 00:06:56.352 "commit": "e729adafb" 00:06:56.352 } 00:06:56.352 } 00:06:56.352 07:34:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.352 07:34:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.610 07:34:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.610 07:34:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.610 07:34:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.610 07:34:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.610 07:34:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.610 07:34:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.610 07:34:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.610 07:34:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:56.610 07:34:14 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.869 request: 00:06:56.869 { 00:06:56.869 "method": "env_dpdk_get_mem_stats", 00:06:56.869 "req_id": 1 00:06:56.869 } 00:06:56.869 Got JSON-RPC error response 00:06:56.869 response: 00:06:56.869 { 00:06:56.869 "code": -32601, 00:06:56.869 "message": "Method not found" 00:06:56.869 } 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.869 07:34:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59246 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59246 ']' 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59246 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59246 00:06:56.869 killing process with pid 59246 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59246' 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@971 -- # kill 59246 00:06:56.869 07:34:14 app_cmdline -- common/autotest_common.sh@976 -- # wait 59246 00:06:57.140 ************************************ 00:06:57.140 END TEST app_cmdline 00:06:57.140 ************************************ 00:06:57.140 00:06:57.140 real 0m2.160s 00:06:57.140 user 0m2.676s 00:06:57.140 sys 0m0.501s 00:06:57.140 07:34:14 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.140 07:34:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.140 07:34:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.140 07:34:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.140 07:34:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.140 07:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.140 ************************************ 00:06:57.140 START TEST version 00:06:57.140 ************************************ 00:06:57.140 07:34:15 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.140 * Looking for test storage... 
00:06:57.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.140 07:34:15 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.398 07:34:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.398 07:34:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.398 07:34:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.398 07:34:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.398 07:34:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.398 07:34:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.398 07:34:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.398 07:34:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.398 07:34:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.398 07:34:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.398 07:34:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.398 07:34:15 version -- scripts/common.sh@344 -- # case "$op" in 00:06:57.398 07:34:15 version -- scripts/common.sh@345 -- # : 1 00:06:57.398 07:34:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.398 07:34:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.398 07:34:15 version -- scripts/common.sh@365 -- # decimal 1 00:06:57.398 07:34:15 version -- scripts/common.sh@353 -- # local d=1 00:06:57.398 07:34:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.398 07:34:15 version -- scripts/common.sh@355 -- # echo 1 00:06:57.398 07:34:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.398 07:34:15 version -- scripts/common.sh@366 -- # decimal 2 00:06:57.398 07:34:15 version -- scripts/common.sh@353 -- # local d=2 00:06:57.398 07:34:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.398 07:34:15 version -- scripts/common.sh@355 -- # echo 2 00:06:57.398 07:34:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.398 07:34:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.398 07:34:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.398 07:34:15 version -- scripts/common.sh@368 -- # return 0 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.398 --rc genhtml_branch_coverage=1 00:06:57.398 --rc genhtml_function_coverage=1 00:06:57.398 --rc genhtml_legend=1 00:06:57.398 --rc geninfo_all_blocks=1 00:06:57.398 --rc geninfo_unexecuted_blocks=1 00:06:57.398 00:06:57.398 ' 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.398 --rc genhtml_branch_coverage=1 00:06:57.398 --rc genhtml_function_coverage=1 00:06:57.398 --rc genhtml_legend=1 00:06:57.398 --rc geninfo_all_blocks=1 00:06:57.398 --rc geninfo_unexecuted_blocks=1 00:06:57.398 00:06:57.398 ' 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.398 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:57.398 --rc genhtml_branch_coverage=1 00:06:57.398 --rc genhtml_function_coverage=1 00:06:57.398 --rc genhtml_legend=1 00:06:57.398 --rc geninfo_all_blocks=1 00:06:57.398 --rc geninfo_unexecuted_blocks=1 00:06:57.398 00:06:57.398 ' 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.398 --rc genhtml_branch_coverage=1 00:06:57.398 --rc genhtml_function_coverage=1 00:06:57.398 --rc genhtml_legend=1 00:06:57.398 --rc geninfo_all_blocks=1 00:06:57.398 --rc geninfo_unexecuted_blocks=1 00:06:57.398 00:06:57.398 ' 00:06:57.398 07:34:15 version -- app/version.sh@17 -- # get_header_version major 00:06:57.398 07:34:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.398 07:34:15 version -- app/version.sh@14 -- # cut -f2 00:06:57.398 07:34:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.398 07:34:15 version -- app/version.sh@17 -- # major=25 00:06:57.398 07:34:15 version -- app/version.sh@18 -- # get_header_version minor 00:06:57.398 07:34:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.398 07:34:15 version -- app/version.sh@14 -- # cut -f2 00:06:57.398 07:34:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.398 07:34:15 version -- app/version.sh@18 -- # minor=1 00:06:57.398 07:34:15 version -- app/version.sh@19 -- # get_header_version patch 00:06:57.398 07:34:15 version -- app/version.sh@14 -- # cut -f2 00:06:57.398 07:34:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.398 07:34:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.398 07:34:15 version -- app/version.sh@19 -- # patch=0 00:06:57.398 07:34:15 version -- app/version.sh@20 -- # get_header_version suffix 00:06:57.398 07:34:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.398 07:34:15 version -- app/version.sh@14 -- # cut -f2 00:06:57.398 07:34:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.398 07:34:15 version -- app/version.sh@20 -- # suffix=-pre 00:06:57.398 07:34:15 version -- app/version.sh@22 -- # version=25.1 00:06:57.398 07:34:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:57.398 07:34:15 version -- app/version.sh@28 -- # version=25.1rc0 00:06:57.398 07:34:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:57.398 07:34:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:57.398 07:34:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:57.398 07:34:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:57.398 00:06:57.398 real 0m0.232s 00:06:57.398 user 0m0.137s 00:06:57.398 sys 0m0.138s 00:06:57.398 ************************************ 00:06:57.398 END TEST version 00:06:57.398 ************************************ 00:06:57.398 07:34:15 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.398 07:34:15 version -- common/autotest_common.sh@10 -- # set +x 00:06:57.398 07:34:15 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:57.398 07:34:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:57.398 07:34:15 -- spdk/autotest.sh@194 -- # uname -s 00:06:57.398 07:34:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:57.398 07:34:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.398 07:34:15 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:57.398 07:34:15 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:57.398 07:34:15 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:57.398 07:34:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.398 07:34:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.398 07:34:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.398 ************************************ 00:06:57.398 START TEST spdk_dd 00:06:57.398 ************************************ 00:06:57.398 07:34:15 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:57.657 * Looking for test storage... 00:06:57.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.657 --rc genhtml_branch_coverage=1 00:06:57.657 --rc genhtml_function_coverage=1 00:06:57.657 --rc genhtml_legend=1 00:06:57.657 --rc geninfo_all_blocks=1 00:06:57.657 --rc geninfo_unexecuted_blocks=1 00:06:57.657 00:06:57.657 ' 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.657 --rc genhtml_branch_coverage=1 00:06:57.657 --rc genhtml_function_coverage=1 00:06:57.657 --rc genhtml_legend=1 00:06:57.657 --rc geninfo_all_blocks=1 00:06:57.657 --rc geninfo_unexecuted_blocks=1 00:06:57.657 00:06:57.657 ' 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.657 --rc genhtml_branch_coverage=1 00:06:57.657 --rc genhtml_function_coverage=1 00:06:57.657 --rc genhtml_legend=1 00:06:57.657 --rc geninfo_all_blocks=1 00:06:57.657 --rc geninfo_unexecuted_blocks=1 00:06:57.657 00:06:57.657 ' 00:06:57.657 07:34:15 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.657 --rc genhtml_branch_coverage=1 00:06:57.657 --rc genhtml_function_coverage=1 00:06:57.657 --rc genhtml_legend=1 00:06:57.657 --rc geninfo_all_blocks=1 00:06:57.657 --rc geninfo_unexecuted_blocks=1 00:06:57.657 00:06:57.657 ' 00:06:57.657 07:34:15 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.657 07:34:15 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.657 07:34:15 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.658 07:34:15 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.658 07:34:15 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.658 07:34:15 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:57.658 07:34:15 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.658 07:34:15 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:57.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.916 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:57.916 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:58.194 07:34:15 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:58.194 07:34:15 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:58.194 07:34:15 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:58.194 07:34:15 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:58.194 07:34:15 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:06:58.194 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:58.195 07:34:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:58.195 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:58.196 * spdk_dd linked to liburing 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:58.196 07:34:16 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:58.196 07:34:16 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:58.196 07:34:16 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:58.196 07:34:16 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:58.196 07:34:16 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:58.196 07:34:16 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:58.196 07:34:16 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.196 07:34:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:58.196 ************************************ 00:06:58.196 START TEST spdk_dd_basic_rw 00:06:58.196 ************************************ 00:06:58.196 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:58.196 * Looking for test storage... 00:06:58.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.197 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:58.197 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:58.197 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:58.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.456 --rc genhtml_branch_coverage=1 00:06:58.456 --rc genhtml_function_coverage=1 00:06:58.456 --rc genhtml_legend=1 00:06:58.456 --rc geninfo_all_blocks=1 00:06:58.456 --rc geninfo_unexecuted_blocks=1 00:06:58.456 00:06:58.456 ' 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:58.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.456 --rc genhtml_branch_coverage=1 00:06:58.456 --rc genhtml_function_coverage=1 00:06:58.456 --rc genhtml_legend=1 00:06:58.456 --rc geninfo_all_blocks=1 00:06:58.456 --rc geninfo_unexecuted_blocks=1 00:06:58.456 00:06:58.456 ' 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:58.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.456 --rc genhtml_branch_coverage=1 00:06:58.456 --rc genhtml_function_coverage=1 00:06:58.456 --rc genhtml_legend=1 00:06:58.456 --rc geninfo_all_blocks=1 00:06:58.456 --rc geninfo_unexecuted_blocks=1 00:06:58.456 00:06:58.456 ' 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:58.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.456 --rc genhtml_branch_coverage=1 00:06:58.456 --rc genhtml_function_coverage=1 00:06:58.456 --rc genhtml_legend=1 00:06:58.456 --rc geninfo_all_blocks=1 00:06:58.456 --rc geninfo_unexecuted_blocks=1 00:06:58.456 00:06:58.456 ' 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:58.456 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:58.457 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:58.457 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.457 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
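The trace that follows shows how dd/common.sh derives the drive's native block size for the basic_rw tests: it captures spdk_nvme_identify output for the controller at 0000:00:10.0, extracts the index from the "Current LBA Format" field, and then reads that format's "Data Size". Below is a minimal standalone sketch of the same pattern, written against the identify binary path and output layout visible in this log; the real get_native_nvme_bs helper in dd/common.sh does a little more bookkeeping.

#!/usr/bin/env bash
# Sketch: derive the native LBA data size the way the trace below does.
# The identify binary path and its output format are taken from this log run.
set -euo pipefail

pci=0000:00:10.0
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify

id=$("$identify" -r "trtype:pcie traddr:$pci")

# e.g. "Current LBA Format: LBA Format #04"
re_current='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re_current ]]
lbaf=${BASH_REMATCH[1]}

# e.g. "LBA Format #04: Data Size: 4096 Metadata Size: 0"
re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re_size ]]
native_bs=${BASH_REMATCH[1]}

echo "$native_bs"   # 4096 for the QEMU controller in this run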
00:06:58.457 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:58.457 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:58.457 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:58.457 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:58.718 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:58.718 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:58.719 ************************************ 00:06:58.719 START TEST dd_bs_lt_native_bs 00:06:58.719 ************************************ 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.719 07:34:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:58.719 { 00:06:58.719 "subsystems": [ 00:06:58.719 { 00:06:58.719 "subsystem": "bdev", 00:06:58.719 "config": [ 00:06:58.719 { 00:06:58.719 "params": { 00:06:58.719 "trtype": "pcie", 00:06:58.719 "traddr": "0000:00:10.0", 00:06:58.719 "name": "Nvme0" 00:06:58.719 }, 00:06:58.719 "method": "bdev_nvme_attach_controller" 00:06:58.719 }, 00:06:58.719 { 00:06:58.719 "method": "bdev_wait_for_examine" 00:06:58.719 } 00:06:58.719 ] 00:06:58.719 } 00:06:58.719 ] 00:06:58.719 } 00:06:58.719 [2024-11-08 07:34:16.532895] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:06:58.719 [2024-11-08 07:34:16.533025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ] 00:06:58.978 [2024-11-08 07:34:16.688432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.978 [2024-11-08 07:34:16.749600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.978 [2024-11-08 07:34:16.798201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.978 [2024-11-08 07:34:16.906415] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:58.978 [2024-11-08 07:34:16.906490] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.236 [2024-11-08 07:34:17.010545] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.236 00:06:59.236 real 0m0.595s 00:06:59.236 user 0m0.393s 00:06:59.236 sys 0m0.152s 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.236 
************************************ 00:06:59.236 END TEST dd_bs_lt_native_bs 00:06:59.236 ************************************ 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.236 ************************************ 00:06:59.236 START TEST dd_rw 00:06:59.236 ************************************ 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:59.236 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.804 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:59.804 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:59.804 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.804 07:34:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.804 [2024-11-08 07:34:17.743532] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
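For orientation while reading the runs that follow: dd_rw sweeps three block sizes derived from the controller's native 4096-byte LBA size and two queue depths, adjusting the block count so each transfer stays around 60 KiB. The derivation below is only a restatement of the assignments visible in the xtrace above, collected in one place (bash):

  native_bs=4096                 # data size of the current LBA format (#04)
  qds=(1 64)                     # queue depths exercised for every block size
  bss=()
  for e in {0..2}; do
    bss+=($((native_bs << e)))   # 4096, 8192, 16384
  done
  # block counts used later in this log: 15, 7 and 3, giving transfer sizes
  # 15*4096=61440, 7*8192=57344 and 3*16384=49152 bytes respectively
  echo "block sizes: ${bss[*]}  queue depths: ${qds[*]}"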
00:06:59.804 [2024-11-08 07:34:17.743603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59634 ] 00:06:59.804 { 00:06:59.804 "subsystems": [ 00:06:59.804 { 00:06:59.804 "subsystem": "bdev", 00:06:59.804 "config": [ 00:06:59.804 { 00:06:59.804 "params": { 00:06:59.804 "trtype": "pcie", 00:06:59.804 "traddr": "0000:00:10.0", 00:06:59.804 "name": "Nvme0" 00:06:59.804 }, 00:06:59.804 "method": "bdev_nvme_attach_controller" 00:06:59.804 }, 00:06:59.804 { 00:06:59.804 "method": "bdev_wait_for_examine" 00:06:59.804 } 00:06:59.804 ] 00:06:59.804 } 00:06:59.804 ] 00:06:59.804 } 00:07:00.063 [2024-11-08 07:34:17.879849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.063 [2024-11-08 07:34:17.928579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.063 [2024-11-08 07:34:17.970425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.322  [2024-11-08T07:34:18.283Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:00.322 00:07:00.322 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:00.322 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:00.322 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.322 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:00.322 [2024-11-08 07:34:18.270581] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
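Every spdk_dd run in this test receives its bdev configuration as --json /dev/fd/62, i.e. the gen_conf output handed in over what is presumably process substitution. Pretty-printed, the config that the trace echoes for each run is simply:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }

In other words: attach the NVMe controller at PCIe address 0000:00:10.0 as bdev Nvme0 and wait for bdev examination to finish before the copy starts.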
00:07:00.322 [2024-11-08 07:34:18.270662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59648 ] 00:07:00.580 { 00:07:00.580 "subsystems": [ 00:07:00.580 { 00:07:00.580 "subsystem": "bdev", 00:07:00.580 "config": [ 00:07:00.580 { 00:07:00.580 "params": { 00:07:00.580 "trtype": "pcie", 00:07:00.580 "traddr": "0000:00:10.0", 00:07:00.580 "name": "Nvme0" 00:07:00.581 }, 00:07:00.581 "method": "bdev_nvme_attach_controller" 00:07:00.581 }, 00:07:00.581 { 00:07:00.581 "method": "bdev_wait_for_examine" 00:07:00.581 } 00:07:00.581 ] 00:07:00.581 } 00:07:00.581 ] 00:07:00.581 } 00:07:00.581 [2024-11-08 07:34:18.408936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.581 [2024-11-08 07:34:18.459131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.581 [2024-11-08 07:34:18.500939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.839  [2024-11-08T07:34:18.800Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:00.839 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:00.839 07:34:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.098 [2024-11-08 07:34:18.807406] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
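Each dd_rw pass above is the same three-step round trip: write the generated dd.dump0 file into the Nvme0n1 bdev at a given block size and queue depth, read the same number of blocks back into dd.dump1, and diff the two files. A condensed sketch of that cycle, assuming the SPDK build path used in this job and that the JSON config shown above has been saved to bdev.json (both of those names are illustrative, not part of the test):

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF=bdev.json                        # the bdev config shown above, saved to a file
  bs=4096; qd=1; count=15               # first pass; later passes vary bs/qd/count

  head -c $((bs * count)) /dev/urandom > dd.dump0   # stand-in for gen_bytes 61440

  "$DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"                    # write
  "$DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"   # read back

  diff -q dd.dump0 dd.dump1 && echo "round trip OK"   # silent diff means byte-identical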
00:07:01.098 [2024-11-08 07:34:18.807479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:07:01.098 { 00:07:01.098 "subsystems": [ 00:07:01.098 { 00:07:01.098 "subsystem": "bdev", 00:07:01.098 "config": [ 00:07:01.098 { 00:07:01.098 "params": { 00:07:01.098 "trtype": "pcie", 00:07:01.098 "traddr": "0000:00:10.0", 00:07:01.098 "name": "Nvme0" 00:07:01.098 }, 00:07:01.098 "method": "bdev_nvme_attach_controller" 00:07:01.098 }, 00:07:01.098 { 00:07:01.098 "method": "bdev_wait_for_examine" 00:07:01.098 } 00:07:01.098 ] 00:07:01.098 } 00:07:01.098 ] 00:07:01.098 } 00:07:01.098 [2024-11-08 07:34:18.944991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.098 [2024-11-08 07:34:18.991025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.098 [2024-11-08 07:34:19.033103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.357  [2024-11-08T07:34:19.318Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:01.357 00:07:01.357 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:01.357 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:01.357 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:01.357 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:01.357 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:01.357 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:01.357 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.923 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:01.923 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:01.923 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:01.923 07:34:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.181 { 00:07:02.181 "subsystems": [ 00:07:02.181 { 00:07:02.181 "subsystem": "bdev", 00:07:02.181 "config": [ 00:07:02.181 { 00:07:02.181 "params": { 00:07:02.181 "trtype": "pcie", 00:07:02.181 "traddr": "0000:00:10.0", 00:07:02.181 "name": "Nvme0" 00:07:02.181 }, 00:07:02.181 "method": "bdev_nvme_attach_controller" 00:07:02.181 }, 00:07:02.181 { 00:07:02.181 "method": "bdev_wait_for_examine" 00:07:02.181 } 00:07:02.181 ] 00:07:02.181 } 00:07:02.181 ] 00:07:02.181 } 00:07:02.181 [2024-11-08 07:34:19.907872] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
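Between passes the trace shows clear_nvme re-zeroing the device so the next bs/qd combination starts from a clean state; it does this with a single 1 MiB write from /dev/zero, which covers the 61440 bytes just written. Using the same illustrative $DD and $CONF as in the sketch above:

  # wipe the region touched by the previous pass (1 MiB >= 61440 bytes)
  "$DD" --if=/dev/zero --bs=1048576 --count=1 --ob=Nvme0n1 --json "$CONF"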
00:07:02.181 [2024-11-08 07:34:19.908162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 00:07:02.181 [2024-11-08 07:34:20.058696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.181 [2024-11-08 07:34:20.105456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.439 [2024-11-08 07:34:20.147448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.439  [2024-11-08T07:34:20.659Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:02.698 00:07:02.698 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:02.698 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:02.698 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.698 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.698 { 00:07:02.698 "subsystems": [ 00:07:02.698 { 00:07:02.698 "subsystem": "bdev", 00:07:02.698 "config": [ 00:07:02.698 { 00:07:02.698 "params": { 00:07:02.698 "trtype": "pcie", 00:07:02.698 "traddr": "0000:00:10.0", 00:07:02.698 "name": "Nvme0" 00:07:02.698 }, 00:07:02.698 "method": "bdev_nvme_attach_controller" 00:07:02.698 }, 00:07:02.698 { 00:07:02.698 "method": "bdev_wait_for_examine" 00:07:02.698 } 00:07:02.698 ] 00:07:02.698 } 00:07:02.698 ] 00:07:02.698 } 00:07:02.698 [2024-11-08 07:34:20.448544] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:02.698 [2024-11-08 07:34:20.448622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:07:02.698 [2024-11-08 07:34:20.589318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.698 [2024-11-08 07:34:20.636710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.957 [2024-11-08 07:34:20.678437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.957  [2024-11-08T07:34:21.177Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:03.216 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.216 07:34:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.216 { 00:07:03.216 "subsystems": [ 00:07:03.216 { 00:07:03.216 "subsystem": "bdev", 00:07:03.216 "config": [ 00:07:03.216 { 00:07:03.216 "params": { 00:07:03.216 "trtype": "pcie", 00:07:03.216 "traddr": "0000:00:10.0", 00:07:03.216 "name": "Nvme0" 00:07:03.216 }, 00:07:03.216 "method": "bdev_nvme_attach_controller" 00:07:03.216 }, 00:07:03.216 { 00:07:03.216 "method": "bdev_wait_for_examine" 00:07:03.216 } 00:07:03.216 ] 00:07:03.216 } 00:07:03.216 ] 00:07:03.216 } 00:07:03.216 [2024-11-08 07:34:21.001706] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:03.216 [2024-11-08 07:34:21.001804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59711 ] 00:07:03.216 [2024-11-08 07:34:21.150428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.475 [2024-11-08 07:34:21.195851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.475 [2024-11-08 07:34:21.237712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.475  [2024-11-08T07:34:21.694Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:03.733 00:07:03.733 07:34:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:03.733 07:34:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:03.733 07:34:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:03.733 07:34:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:03.733 07:34:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:03.733 07:34:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:03.733 07:34:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:03.733 07:34:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.301 07:34:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:04.301 07:34:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:04.301 07:34:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.301 07:34:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.301 { 00:07:04.301 "subsystems": [ 00:07:04.301 { 00:07:04.301 "subsystem": "bdev", 00:07:04.301 "config": [ 00:07:04.301 { 00:07:04.301 "params": { 00:07:04.301 "trtype": "pcie", 00:07:04.301 "traddr": "0000:00:10.0", 00:07:04.301 "name": "Nvme0" 00:07:04.301 }, 00:07:04.301 "method": "bdev_nvme_attach_controller" 00:07:04.301 }, 00:07:04.301 { 00:07:04.301 "method": "bdev_wait_for_examine" 00:07:04.301 } 00:07:04.301 ] 00:07:04.301 } 00:07:04.301 ] 00:07:04.301 } 00:07:04.301 [2024-11-08 07:34:22.084937] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:04.301 [2024-11-08 07:34:22.085235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59730 ] 00:07:04.301 [2024-11-08 07:34:22.235522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.559 [2024-11-08 07:34:22.287759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.559 [2024-11-08 07:34:22.329443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.559  [2024-11-08T07:34:22.779Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:04.818 00:07:04.818 07:34:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:04.818 07:34:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:04.818 07:34:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.818 07:34:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 [2024-11-08 07:34:22.641934] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:04.818 [2024-11-08 07:34:22.642266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:07:04.818 { 00:07:04.818 "subsystems": [ 00:07:04.818 { 00:07:04.818 "subsystem": "bdev", 00:07:04.818 "config": [ 00:07:04.818 { 00:07:04.818 "params": { 00:07:04.818 "trtype": "pcie", 00:07:04.818 "traddr": "0000:00:10.0", 00:07:04.818 "name": "Nvme0" 00:07:04.818 }, 00:07:04.818 "method": "bdev_nvme_attach_controller" 00:07:04.818 }, 00:07:04.818 { 00:07:04.818 "method": "bdev_wait_for_examine" 00:07:04.818 } 00:07:04.818 ] 00:07:04.818 } 00:07:04.818 ] 00:07:04.818 } 00:07:05.076 [2024-11-08 07:34:22.793030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.076 [2024-11-08 07:34:22.842962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.076 [2024-11-08 07:34:22.884916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.076  [2024-11-08T07:34:23.297Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:05.336 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:05.336 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:05.336 { 00:07:05.336 "subsystems": [ 00:07:05.336 { 00:07:05.336 "subsystem": "bdev", 00:07:05.336 "config": [ 00:07:05.336 { 00:07:05.336 "params": { 00:07:05.336 "trtype": "pcie", 00:07:05.336 "traddr": "0000:00:10.0", 00:07:05.336 "name": "Nvme0" 00:07:05.336 }, 00:07:05.336 "method": "bdev_nvme_attach_controller" 00:07:05.336 }, 00:07:05.336 { 00:07:05.336 "method": "bdev_wait_for_examine" 00:07:05.336 } 00:07:05.336 ] 00:07:05.336 } 00:07:05.336 ] 00:07:05.336 } 00:07:05.336 [2024-11-08 07:34:23.206856] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:05.336 [2024-11-08 07:34:23.207137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59759 ] 00:07:05.594 [2024-11-08 07:34:23.347652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.594 [2024-11-08 07:34:23.399613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.594 [2024-11-08 07:34:23.441464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.594  [2024-11-08T07:34:23.814Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:05.853 00:07:05.853 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:05.853 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:05.853 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:05.853 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:05.853 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:05.853 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:05.853 07:34:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.421 07:34:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:06.421 07:34:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:06.421 07:34:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.421 07:34:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.421 { 00:07:06.421 "subsystems": [ 00:07:06.421 { 00:07:06.421 "subsystem": "bdev", 00:07:06.421 "config": [ 00:07:06.421 { 00:07:06.421 "params": { 00:07:06.421 "trtype": "pcie", 00:07:06.421 "traddr": "0000:00:10.0", 00:07:06.421 "name": "Nvme0" 00:07:06.421 }, 00:07:06.421 "method": "bdev_nvme_attach_controller" 00:07:06.421 }, 00:07:06.421 { 00:07:06.421 "method": "bdev_wait_for_examine" 00:07:06.421 } 00:07:06.421 ] 00:07:06.421 } 00:07:06.421 ] 00:07:06.421 } 00:07:06.421 [2024-11-08 07:34:24.339924] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:06.421 [2024-11-08 07:34:24.340199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59778 ] 00:07:06.680 [2024-11-08 07:34:24.489546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.680 [2024-11-08 07:34:24.539852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.680 [2024-11-08 07:34:24.582094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.939  [2024-11-08T07:34:24.900Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:06.939 00:07:06.939 07:34:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:06.939 07:34:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:06.939 07:34:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.939 07:34:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.939 { 00:07:06.939 "subsystems": [ 00:07:06.939 { 00:07:06.939 "subsystem": "bdev", 00:07:06.939 "config": [ 00:07:06.939 { 00:07:06.939 "params": { 00:07:06.939 "trtype": "pcie", 00:07:06.939 "traddr": "0000:00:10.0", 00:07:06.939 "name": "Nvme0" 00:07:06.939 }, 00:07:06.939 "method": "bdev_nvme_attach_controller" 00:07:06.939 }, 00:07:06.939 { 00:07:06.939 "method": "bdev_wait_for_examine" 00:07:06.939 } 00:07:06.939 ] 00:07:06.939 } 00:07:06.939 ] 00:07:06.939 } 00:07:07.199 [2024-11-08 07:34:24.899217] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:07.199 [2024-11-08 07:34:24.899315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59792 ] 00:07:07.199 [2024-11-08 07:34:25.049953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.199 [2024-11-08 07:34:25.098675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.199 [2024-11-08 07:34:25.140555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.525  [2024-11-08T07:34:25.486Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:07.525 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.525 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.525 { 00:07:07.525 "subsystems": [ 00:07:07.525 { 00:07:07.525 "subsystem": "bdev", 00:07:07.525 "config": [ 00:07:07.525 { 00:07:07.525 "params": { 00:07:07.525 "trtype": "pcie", 00:07:07.525 "traddr": "0000:00:10.0", 00:07:07.525 "name": "Nvme0" 00:07:07.525 }, 00:07:07.525 "method": "bdev_nvme_attach_controller" 00:07:07.525 }, 00:07:07.525 { 00:07:07.525 "method": "bdev_wait_for_examine" 00:07:07.525 } 00:07:07.525 ] 00:07:07.525 } 00:07:07.525 ] 00:07:07.525 } 00:07:07.525 [2024-11-08 07:34:25.459430] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:07.525 [2024-11-08 07:34:25.459673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:07:07.819 [2024-11-08 07:34:25.610309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.819 [2024-11-08 07:34:25.659195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.819 [2024-11-08 07:34:25.700806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.077  [2024-11-08T07:34:26.038Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:08.077 00:07:08.077 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:08.077 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:08.077 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:08.077 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:08.077 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:08.077 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:08.077 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:08.077 07:34:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.644 07:34:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:08.644 07:34:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:08.644 07:34:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.644 07:34:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.644 { 00:07:08.644 "subsystems": [ 00:07:08.644 { 00:07:08.644 "subsystem": "bdev", 00:07:08.644 "config": [ 00:07:08.644 { 00:07:08.644 "params": { 00:07:08.644 "trtype": "pcie", 00:07:08.644 "traddr": "0000:00:10.0", 00:07:08.644 "name": "Nvme0" 00:07:08.644 }, 00:07:08.644 "method": "bdev_nvme_attach_controller" 00:07:08.644 }, 00:07:08.644 { 00:07:08.644 "method": "bdev_wait_for_examine" 00:07:08.644 } 00:07:08.644 ] 00:07:08.644 } 00:07:08.644 ] 00:07:08.644 } 00:07:08.644 [2024-11-08 07:34:26.469145] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:08.644 [2024-11-08 07:34:26.469238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59831 ] 00:07:08.903 [2024-11-08 07:34:26.619735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.903 [2024-11-08 07:34:26.664817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.904 [2024-11-08 07:34:26.706284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.904  [2024-11-08T07:34:27.123Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:09.162 00:07:09.162 07:34:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:09.162 07:34:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:09.162 07:34:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.162 07:34:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.162 [2024-11-08 07:34:27.013661] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:09.162 [2024-11-08 07:34:27.013757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:07:09.162 { 00:07:09.162 "subsystems": [ 00:07:09.162 { 00:07:09.162 "subsystem": "bdev", 00:07:09.162 "config": [ 00:07:09.162 { 00:07:09.162 "params": { 00:07:09.162 "trtype": "pcie", 00:07:09.162 "traddr": "0000:00:10.0", 00:07:09.162 "name": "Nvme0" 00:07:09.162 }, 00:07:09.162 "method": "bdev_nvme_attach_controller" 00:07:09.162 }, 00:07:09.162 { 00:07:09.162 "method": "bdev_wait_for_examine" 00:07:09.162 } 00:07:09.162 ] 00:07:09.162 } 00:07:09.162 ] 00:07:09.162 } 00:07:09.421 [2024-11-08 07:34:27.162898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.421 [2024-11-08 07:34:27.211603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.421 [2024-11-08 07:34:27.253828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.421  [2024-11-08T07:34:27.641Z] Copying: 48/48 [kB] (average 23 MBps) 00:07:09.680 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.680 07:34:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.680 { 00:07:09.680 "subsystems": [ 00:07:09.680 { 00:07:09.680 "subsystem": "bdev", 00:07:09.680 "config": [ 00:07:09.680 { 00:07:09.680 "params": { 00:07:09.680 "trtype": "pcie", 00:07:09.680 "traddr": "0000:00:10.0", 00:07:09.680 "name": "Nvme0" 00:07:09.680 }, 00:07:09.680 "method": "bdev_nvme_attach_controller" 00:07:09.680 }, 00:07:09.680 { 00:07:09.680 "method": "bdev_wait_for_examine" 00:07:09.681 } 00:07:09.681 ] 00:07:09.681 } 00:07:09.681 ] 00:07:09.681 } 00:07:09.681 [2024-11-08 07:34:27.570499] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:09.681 [2024-11-08 07:34:27.570753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ] 00:07:09.939 [2024-11-08 07:34:27.722365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.939 [2024-11-08 07:34:27.767520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.939 [2024-11-08 07:34:27.809281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.198  [2024-11-08T07:34:28.159Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:10.198 00:07:10.198 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:10.198 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:10.198 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:10.198 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:10.198 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:10.198 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:10.198 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.766 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:10.766 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:10.766 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.766 07:34:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.766 [2024-11-08 07:34:28.520595] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:10.766 [2024-11-08 07:34:28.520784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59879 ] 00:07:10.766 { 00:07:10.766 "subsystems": [ 00:07:10.766 { 00:07:10.766 "subsystem": "bdev", 00:07:10.766 "config": [ 00:07:10.766 { 00:07:10.766 "params": { 00:07:10.766 "trtype": "pcie", 00:07:10.766 "traddr": "0000:00:10.0", 00:07:10.766 "name": "Nvme0" 00:07:10.766 }, 00:07:10.766 "method": "bdev_nvme_attach_controller" 00:07:10.766 }, 00:07:10.766 { 00:07:10.766 "method": "bdev_wait_for_examine" 00:07:10.766 } 00:07:10.766 ] 00:07:10.766 } 00:07:10.766 ] 00:07:10.766 } 00:07:10.766 [2024-11-08 07:34:28.661181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.766 [2024-11-08 07:34:28.711520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.024 [2024-11-08 07:34:28.753455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.024  [2024-11-08T07:34:29.244Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:11.283 00:07:11.283 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:11.283 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:11.283 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.283 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.283 { 00:07:11.283 "subsystems": [ 00:07:11.283 { 00:07:11.283 "subsystem": "bdev", 00:07:11.283 "config": [ 00:07:11.283 { 00:07:11.283 "params": { 00:07:11.283 "trtype": "pcie", 00:07:11.283 "traddr": "0000:00:10.0", 00:07:11.283 "name": "Nvme0" 00:07:11.283 }, 00:07:11.283 "method": "bdev_nvme_attach_controller" 00:07:11.283 }, 00:07:11.283 { 00:07:11.283 "method": "bdev_wait_for_examine" 00:07:11.283 } 00:07:11.283 ] 00:07:11.283 } 00:07:11.283 ] 00:07:11.283 } 00:07:11.283 [2024-11-08 07:34:29.067019] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:11.283 [2024-11-08 07:34:29.067269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59888 ] 00:07:11.283 [2024-11-08 07:34:29.215907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.541 [2024-11-08 07:34:29.261606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.541 [2024-11-08 07:34:29.303774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.541  [2024-11-08T07:34:29.768Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:11.807 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.807 07:34:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.807 [2024-11-08 07:34:29.608657] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:11.807 [2024-11-08 07:34:29.608851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59909 ] 00:07:11.807 { 00:07:11.807 "subsystems": [ 00:07:11.807 { 00:07:11.807 "subsystem": "bdev", 00:07:11.807 "config": [ 00:07:11.807 { 00:07:11.807 "params": { 00:07:11.807 "trtype": "pcie", 00:07:11.807 "traddr": "0000:00:10.0", 00:07:11.807 "name": "Nvme0" 00:07:11.807 }, 00:07:11.807 "method": "bdev_nvme_attach_controller" 00:07:11.807 }, 00:07:11.807 { 00:07:11.807 "method": "bdev_wait_for_examine" 00:07:11.807 } 00:07:11.807 ] 00:07:11.807 } 00:07:11.807 ] 00:07:11.807 } 00:07:11.807 [2024-11-08 07:34:29.746829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.065 [2024-11-08 07:34:29.794154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.065 [2024-11-08 07:34:29.836382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.065  [2024-11-08T07:34:30.284Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:12.323 00:07:12.323 00:07:12.323 real 0m12.965s 00:07:12.323 user 0m9.054s 00:07:12.323 sys 0m4.897s 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.323 ************************************ 00:07:12.323 END TEST dd_rw 00:07:12.323 ************************************ 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.323 ************************************ 00:07:12.323 START TEST dd_rw_offset 00:07:12.323 ************************************ 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:12.323 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:12.324 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:12.324 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:12.324 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:12.324 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=czwv3y16plwouh26q051e34wszc2yjfaztl5k1pllgmagtrenzlfjjmnfu8jyaaovju614yuzoxbq6708ty2pnzui671tsvwgv583det5giwgelh08bewfqcx87zjj9toc619syz64gqjktyc8i5pb9z0w2j8r1p0y218749d1qmouz45xtngk1r0fqq4nu0ajr32rx004tjggg7um9neffm38tuao4j0fkb7ygtatfuurgnp48p7kd5v84v3o0rfekrq0vy6thgx0y2zyqxls794n9fx0fb4edrk26mchqipt2huh3bydyc8283uolrt1djymipquhjjnimzzgjzp4twq8dwh81yvwfa7u91hhkzf9cfchjbad6wvgyuobh2j4ae7m9ri4cdxpgdijj9wxz0dxlreu5noi1b6ne0mvxhle7zi6hl1znbgdiqgat5t8xxidci6r7bkmm91y9ojgu9y81benv3sakn8rnum6o5bw4ib2hxuhfd8hthvx6udx114v93s1uvz7ehfkheaeuzncjj5xo8aacy4rgrwmrv76cmdsx6jj722qjcz8w0frhzi1rbfpqaqj2mxkabcmuwhdtpu71jjj487o1hb7k5r12dwzyy8zx0sewx8m6z230chclnjalq5nhrhpe3b2mkj28dme8de0lb694h4zaaiwzcl5wi698c5j1xg3nr53g13ggvvllea79imftn844p29zt503nc2rwk2e0zl6let08x81pg0o79acid6qhraut6ckpvk9758vuudny5a95b529h1wq8njphvhidpr7htf0fvm4a2dhka99jnxkql4fvor46o4o4776uuq7vi3eif59q1h7e8xl0wiax1yvoinjykngl7qep8y9i8huvo9g9tucm7mph8itm12lnfw50d0mbvdzkozafn31n2q5a5qk63z4atyb2bkfmhnjfk366rbqytwmdl0luvwqy0vynr76d4y6mo072lrs0c0o119157bs7rxpzxmurag5gweg529ok20a0lmdjll0ywvcgce2xewxl6cnm7qnnm9cw7etl1xii044icucxndhzrroqhnfzbf9qvqcty12oys2izul7i8al6qwngze15cznbgb2uzravfflrtz64laudzmdj02bekp86juk1gvph06unm24gudiz08ofqanij4g094uixqzz93sfbyoqqpciw5a0gynw6tpk4zs73ayer0odqy80y2rq7a6dgeioztlmay7tl78721809dvjk1148jqvxzw6b4p1234rd5g4rsmka7zwxn8s33mkh2vn6a1jdty5tvr95pcv65wec7q1wcjjbs5zvpas1ghy6zfkwiic5gop38kmnr224e0umlgg33qalenwp4643ep5bpxttxt0lf435dzgcuajvlxclvsuu07sdp5hf18lks1m80h3nsdj1154ib9iuf4h3grpxjvlrlmk3cvs5v2ib8n3jyp9ntbrgz2zmv1bb92iuepdupuiah0qqayyabsfhzlh984a2c0b8c7zbi5q0vrw3nhokflu2egpfwnxe8uuo613n4bwe3785wu9pucz4q9qrxwy4fqp17t4n02sk6ydg8wn9n2kg90gqhuirrmkepc0vjptp7p500ogmgvh26bae33a4m5v73cyjhfya5lystani2aiag3za9208yhrzz4tjqk7084eg3p4x9hay0ww92mw19xxe0a9d4ut7lrh9rg8wpj06m3yzydfnm3uy5hcyenic8fa7520mrlixbhcr78vlxw3h3e891uwuy96e6rvgn8pvzovxwyd4gp1y53ugpx7mvhc97y3bqfoplpuymzvau2c72r7m8s7keymt86qnntx7vfyu4mq362lop0cngj7fz6w6irfz7hk59iuztlipj2w9r7tevvlft5hosxcqkdel2zquzkm3r17c7i9zxm4l7v1w8isjjo1yeh6dr3fvpvw4x2rve1pzqx9hd35mjodo330efykai36o8mdhohumnkshf4145z4ukzdl03ed5pan2awefh6yo3twrf2wf11lk9dpoxpaq4qkup7hskib43tc7egu2heh3ahk1miv2njpucpvv2igipq6sc6acyvucjxnfgnpxzvzx8sn2hykl0b7hyw359kqqh5a9z3hxsmwb7a56stsdm24qhfl526kv53ohzbbutl2925qplg8cda7tz6r4gbvo5tq0zrklst8c2si6z9w7ke4fi000pihbxwr371ko6cwehiveok9zd1exvbpto3vsd1so1ejsqb0lujxjv7krnqmmtq8tz9zglnrwev2drqznvydaejn41wba6qv47tvruhv25uu4gae6tprxloyempqcul9b498h9mrewq8ek65gfc0a6m9afw97p8xef3h0ec5t4ugzwhof6wslafn4ft1kvsorn8974k7p5m7z3yctj832bvejti4jswgeo3djtbxpmv0i678p5f1b4ximuydapycw008zi7tgi0ymv9u0svzqdyrsadz6aw29fix3ggrrrrdx44sil0zdivtihp5vh0uubo9n50f3z6ubrs66e6r5l49kf2zgoubukeelg53edqc7kxa2ir3nbrmcx2j27q8stit7surd7qbecko6ziwh60cnjd5xxfz6cxym1tadcm8bh0jdp5zyz6l19qxce3lh298qxwf5544ofl4kznqy81fmktzx4dwky7ldyzp8n4eeb01wmyjk5sc6yrbpp9wpjsnzt876u7solwkf3413gehroo9g6t9ct61orzxecx494hzeqxyamogbu4s0zisnpgvr87kcykp3r3b69c6v8xool4j7xrtkn347rm4ml2gb5lng8xwmk6ygg2fspst20m7szrra5fytz1dkyy54f4c0w98sxjvh63lgvt3fftkid7z9oml5wl0bq3jt075jxzdsrfbzlhdkvdxt40d5tsukougvigr01lqm0rtu096wn2tnxr8y8ebu2jqqlzy7cv44wx5h9leqe6dlnfmwjcjwkv2clh0lviklt8ugwhkjqxwwyefj1vf16vftiay1mkjtvvwhtnpmj1h1uaelvmd18wah9n12tzgdgukr1w5hn5yl4k56amwhp3eh6rgcsdtcuqmx1tbctxbnjozzlldev3fc3coa87wg7rbw3tt9npw4wzk4ztfe5uxjksr1914dsw38ikh3hhcbpwv9ot46p8cvcd1dzweqldvceybmaztztm3prllgj0faheixezawq94sy9lprvlpy744i5m9ta7rw8iiqe7cuq7m0oqew367dj5jb6pxs2myhpct6ybyqrfhcsfrj4jvj0dfv8evl0vz3ipkbh08vbf9bogu2i5wwkzjeoprsders1vodgsk6g4jxmqkww0oht037c8hynpjegim559ljt50dsng7kvhox2mtego3p0vpckxnicqdgmo3xerzs73d322ydtr6y9fi9zitngyeq9q57eejqbjw08z82r8tdm3quuz4
4tpdolg2wkyfoyox836z3p4bvd5vgba26mzregyutr4mh8ed7sps0fnfbwnfc9nu4vowive56pbqtkg61r4d4bfhjb19wxdu44ludzjydv5xale0jhr8qn232y12kdwd7owftt2pqa4opuw7yws1f9tu8f15fbttyp5ndmd8njfw27kruethu7tflyzyyrpp5ilddql5yo185yhun9gdb90rgtlmgi0vi1vo9kaphrqe5az58740nrmh606h51ll4ydljendwk65wwmn6cmd5ly2qnhdt233y3c1fny0rskyixwcjsuz9cgxsq4bjnnys6cicjql4rhk46onmt2a9a3wfn80mqve1bbwtdg067zc0v2arpu3pbr197v1n2knz8hul4m1hw3yyx3vx9i9szk1ixa6tg6sv6kjehdf71ndp9i1rspy0dpazxrurgw1wz8a3q44ydtb12azyzxlfmczz9vkgm0psz0tban1904q04rm1aglaqdy0cwdzqxqvrhag8acyito1tqxin0w3y134l71rgk9od 00:07:12.324 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:12.324 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:12.324 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:12.324 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:12.324 { 00:07:12.324 "subsystems": [ 00:07:12.324 { 00:07:12.324 "subsystem": "bdev", 00:07:12.324 "config": [ 00:07:12.324 { 00:07:12.324 "params": { 00:07:12.324 "trtype": "pcie", 00:07:12.324 "traddr": "0000:00:10.0", 00:07:12.324 "name": "Nvme0" 00:07:12.324 }, 00:07:12.324 "method": "bdev_nvme_attach_controller" 00:07:12.324 }, 00:07:12.324 { 00:07:12.324 "method": "bdev_wait_for_examine" 00:07:12.324 } 00:07:12.324 ] 00:07:12.324 } 00:07:12.324 ] 00:07:12.324 } 00:07:12.324 [2024-11-08 07:34:30.275389] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:12.324 [2024-11-08 07:34:30.275484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59938 ] 00:07:12.583 [2024-11-08 07:34:30.425899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.583 [2024-11-08 07:34:30.472364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.583 [2024-11-08 07:34:30.515692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.843  [2024-11-08T07:34:30.804Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:12.843 00:07:12.843 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:12.843 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:12.843 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:12.844 07:34:30 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:13.103 { 00:07:13.103 "subsystems": [ 00:07:13.103 { 00:07:13.103 "subsystem": "bdev", 00:07:13.103 "config": [ 00:07:13.103 { 00:07:13.103 "params": { 00:07:13.103 "trtype": "pcie", 00:07:13.103 "traddr": "0000:00:10.0", 00:07:13.103 "name": "Nvme0" 00:07:13.103 }, 00:07:13.103 "method": "bdev_nvme_attach_controller" 00:07:13.103 }, 00:07:13.103 { 00:07:13.103 "method": "bdev_wait_for_examine" 00:07:13.103 } 00:07:13.103 ] 00:07:13.103 } 00:07:13.103 ] 00:07:13.103 } 00:07:13.103 [2024-11-08 07:34:30.833857] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
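The dd_rw_offset test that begins above exercises --seek/--skip rather than bulk throughput: it generates a 4096-byte random string (the long data= value in the trace), writes it one block past the start of the bdev, reads that single block back, and compares the two strings (the comparison follows below). A compact sketch of the same round trip, reusing the illustrative $DD and $CONF from earlier; the data generator here is only a stand-in for the test's gen_bytes helper:

  data=$(head -c 3072 /dev/urandom | base64 -w0)   # 4096 printable characters
  printf '%s' "$data" > dd.dump0

  "$DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json "$CONF"            # write at block offset 1
  "$DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$CONF"  # read the same block back

  read -rn4096 data_check < dd.dump1
  [[ "$data" == "$data_check" ]] && echo "offset round trip OK"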
00:07:13.103 [2024-11-08 07:34:30.833946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59953 ] 00:07:13.103 [2024-11-08 07:34:30.981703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.103 [2024-11-08 07:34:31.026871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.362 [2024-11-08 07:34:31.068779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.362  [2024-11-08T07:34:31.323Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:13.362 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ czwv3y16plwouh26q051e34wszc2yjfaztl5k1pllgmagtrenzlfjjmnfu8jyaaovju614yuzoxbq6708ty2pnzui671tsvwgv583det5giwgelh08bewfqcx87zjj9toc619syz64gqjktyc8i5pb9z0w2j8r1p0y218749d1qmouz45xtngk1r0fqq4nu0ajr32rx004tjggg7um9neffm38tuao4j0fkb7ygtatfuurgnp48p7kd5v84v3o0rfekrq0vy6thgx0y2zyqxls794n9fx0fb4edrk26mchqipt2huh3bydyc8283uolrt1djymipquhjjnimzzgjzp4twq8dwh81yvwfa7u91hhkzf9cfchjbad6wvgyuobh2j4ae7m9ri4cdxpgdijj9wxz0dxlreu5noi1b6ne0mvxhle7zi6hl1znbgdiqgat5t8xxidci6r7bkmm91y9ojgu9y81benv3sakn8rnum6o5bw4ib2hxuhfd8hthvx6udx114v93s1uvz7ehfkheaeuzncjj5xo8aacy4rgrwmrv76cmdsx6jj722qjcz8w0frhzi1rbfpqaqj2mxkabcmuwhdtpu71jjj487o1hb7k5r12dwzyy8zx0sewx8m6z230chclnjalq5nhrhpe3b2mkj28dme8de0lb694h4zaaiwzcl5wi698c5j1xg3nr53g13ggvvllea79imftn844p29zt503nc2rwk2e0zl6let08x81pg0o79acid6qhraut6ckpvk9758vuudny5a95b529h1wq8njphvhidpr7htf0fvm4a2dhka99jnxkql4fvor46o4o4776uuq7vi3eif59q1h7e8xl0wiax1yvoinjykngl7qep8y9i8huvo9g9tucm7mph8itm12lnfw50d0mbvdzkozafn31n2q5a5qk63z4atyb2bkfmhnjfk366rbqytwmdl0luvwqy0vynr76d4y6mo072lrs0c0o119157bs7rxpzxmurag5gweg529ok20a0lmdjll0ywvcgce2xewxl6cnm7qnnm9cw7etl1xii044icucxndhzrroqhnfzbf9qvqcty12oys2izul7i8al6qwngze15cznbgb2uzravfflrtz64laudzmdj02bekp86juk1gvph06unm24gudiz08ofqanij4g094uixqzz93sfbyoqqpciw5a0gynw6tpk4zs73ayer0odqy80y2rq7a6dgeioztlmay7tl78721809dvjk1148jqvxzw6b4p1234rd5g4rsmka7zwxn8s33mkh2vn6a1jdty5tvr95pcv65wec7q1wcjjbs5zvpas1ghy6zfkwiic5gop38kmnr224e0umlgg33qalenwp4643ep5bpxttxt0lf435dzgcuajvlxclvsuu07sdp5hf18lks1m80h3nsdj1154ib9iuf4h3grpxjvlrlmk3cvs5v2ib8n3jyp9ntbrgz2zmv1bb92iuepdupuiah0qqayyabsfhzlh984a2c0b8c7zbi5q0vrw3nhokflu2egpfwnxe8uuo613n4bwe3785wu9pucz4q9qrxwy4fqp17t4n02sk6ydg8wn9n2kg90gqhuirrmkepc0vjptp7p500ogmgvh26bae33a4m5v73cyjhfya5lystani2aiag3za9208yhrzz4tjqk7084eg3p4x9hay0ww92mw19xxe0a9d4ut7lrh9rg8wpj06m3yzydfnm3uy5hcyenic8fa7520mrlixbhcr78vlxw3h3e891uwuy96e6rvgn8pvzovxwyd4gp1y53ugpx7mvhc97y3bqfoplpuymzvau2c72r7m8s7keymt86qnntx7vfyu4mq362lop0cngj7fz6w6irfz7hk59iuztlipj2w9r7tevvlft5hosxcqkdel2zquzkm3r17c7i9zxm4l7v1w8isjjo1yeh6dr3fvpvw4x2rve1pzqx9hd35mjodo330efykai36o8mdhohumnkshf4145z4ukzdl03ed5pan2awefh6yo3twrf2wf11lk9dpoxpaq4qkup7hskib43tc7egu2heh3ahk1miv2njpucpvv2igipq6sc6acyvucjxnfgnpxzvzx8sn2hykl0b7hyw359kqqh5a9z3hxsmwb7a56stsdm24qhfl526kv53ohzbbutl2925qplg8cda7tz6r4gbvo5tq0zrklst8c2si6z9w7ke4fi000pihbxwr371ko6cwehiveok9zd1exvbpto3vsd1so1ejsqb0lujxjv7krnqmmtq8tz9zglnrwev2drqznvydaejn41wba6qv47tvruhv25uu4gae6tprxloyempqcul9b498h9mrewq8ek65gfc0a6m9afw97p8xef3h0ec5t4ugzwhof6wslafn4ft1kvsorn8974k7p5m7z3yctj832bvejti4jswgeo3djtbxpmv0i678p5f1b4ximuydapycw008zi7tgi0ymv9u0svzqdyrsadz6aw29fix3ggrrrrdx44sil0zdivtihp5vh0uubo9n50f3z6ubrs66e6r
5l49kf2zgoubukeelg53edqc7kxa2ir3nbrmcx2j27q8stit7surd7qbecko6ziwh60cnjd5xxfz6cxym1tadcm8bh0jdp5zyz6l19qxce3lh298qxwf5544ofl4kznqy81fmktzx4dwky7ldyzp8n4eeb01wmyjk5sc6yrbpp9wpjsnzt876u7solwkf3413gehroo9g6t9ct61orzxecx494hzeqxyamogbu4s0zisnpgvr87kcykp3r3b69c6v8xool4j7xrtkn347rm4ml2gb5lng8xwmk6ygg2fspst20m7szrra5fytz1dkyy54f4c0w98sxjvh63lgvt3fftkid7z9oml5wl0bq3jt075jxzdsrfbzlhdkvdxt40d5tsukougvigr01lqm0rtu096wn2tnxr8y8ebu2jqqlzy7cv44wx5h9leqe6dlnfmwjcjwkv2clh0lviklt8ugwhkjqxwwyefj1vf16vftiay1mkjtvvwhtnpmj1h1uaelvmd18wah9n12tzgdgukr1w5hn5yl4k56amwhp3eh6rgcsdtcuqmx1tbctxbnjozzlldev3fc3coa87wg7rbw3tt9npw4wzk4ztfe5uxjksr1914dsw38ikh3hhcbpwv9ot46p8cvcd1dzweqldvceybmaztztm3prllgj0faheixezawq94sy9lprvlpy744i5m9ta7rw8iiqe7cuq7m0oqew367dj5jb6pxs2myhpct6ybyqrfhcsfrj4jvj0dfv8evl0vz3ipkbh08vbf9bogu2i5wwkzjeoprsders1vodgsk6g4jxmqkww0oht037c8hynpjegim559ljt50dsng7kvhox2mtego3p0vpckxnicqdgmo3xerzs73d322ydtr6y9fi9zitngyeq9q57eejqbjw08z82r8tdm3quuz44tpdolg2wkyfoyox836z3p4bvd5vgba26mzregyutr4mh8ed7sps0fnfbwnfc9nu4vowive56pbqtkg61r4d4bfhjb19wxdu44ludzjydv5xale0jhr8qn232y12kdwd7owftt2pqa4opuw7yws1f9tu8f15fbttyp5ndmd8njfw27kruethu7tflyzyyrpp5ilddql5yo185yhun9gdb90rgtlmgi0vi1vo9kaphrqe5az58740nrmh606h51ll4ydljendwk65wwmn6cmd5ly2qnhdt233y3c1fny0rskyixwcjsuz9cgxsq4bjnnys6cicjql4rhk46onmt2a9a3wfn80mqve1bbwtdg067zc0v2arpu3pbr197v1n2knz8hul4m1hw3yyx3vx9i9szk1ixa6tg6sv6kjehdf71ndp9i1rspy0dpazxrurgw1wz8a3q44ydtb12azyzxlfmczz9vkgm0psz0tban1904q04rm1aglaqdy0cwdzqxqvrhag8acyito1tqxin0w3y134l71rgk9od == \c\z\w\v\3\y\1\6\p\l\w\o\u\h\2\6\q\0\5\1\e\3\4\w\s\z\c\2\y\j\f\a\z\t\l\5\k\1\p\l\l\g\m\a\g\t\r\e\n\z\l\f\j\j\m\n\f\u\8\j\y\a\a\o\v\j\u\6\1\4\y\u\z\o\x\b\q\6\7\0\8\t\y\2\p\n\z\u\i\6\7\1\t\s\v\w\g\v\5\8\3\d\e\t\5\g\i\w\g\e\l\h\0\8\b\e\w\f\q\c\x\8\7\z\j\j\9\t\o\c\6\1\9\s\y\z\6\4\g\q\j\k\t\y\c\8\i\5\p\b\9\z\0\w\2\j\8\r\1\p\0\y\2\1\8\7\4\9\d\1\q\m\o\u\z\4\5\x\t\n\g\k\1\r\0\f\q\q\4\n\u\0\a\j\r\3\2\r\x\0\0\4\t\j\g\g\g\7\u\m\9\n\e\f\f\m\3\8\t\u\a\o\4\j\0\f\k\b\7\y\g\t\a\t\f\u\u\r\g\n\p\4\8\p\7\k\d\5\v\8\4\v\3\o\0\r\f\e\k\r\q\0\v\y\6\t\h\g\x\0\y\2\z\y\q\x\l\s\7\9\4\n\9\f\x\0\f\b\4\e\d\r\k\2\6\m\c\h\q\i\p\t\2\h\u\h\3\b\y\d\y\c\8\2\8\3\u\o\l\r\t\1\d\j\y\m\i\p\q\u\h\j\j\n\i\m\z\z\g\j\z\p\4\t\w\q\8\d\w\h\8\1\y\v\w\f\a\7\u\9\1\h\h\k\z\f\9\c\f\c\h\j\b\a\d\6\w\v\g\y\u\o\b\h\2\j\4\a\e\7\m\9\r\i\4\c\d\x\p\g\d\i\j\j\9\w\x\z\0\d\x\l\r\e\u\5\n\o\i\1\b\6\n\e\0\m\v\x\h\l\e\7\z\i\6\h\l\1\z\n\b\g\d\i\q\g\a\t\5\t\8\x\x\i\d\c\i\6\r\7\b\k\m\m\9\1\y\9\o\j\g\u\9\y\8\1\b\e\n\v\3\s\a\k\n\8\r\n\u\m\6\o\5\b\w\4\i\b\2\h\x\u\h\f\d\8\h\t\h\v\x\6\u\d\x\1\1\4\v\9\3\s\1\u\v\z\7\e\h\f\k\h\e\a\e\u\z\n\c\j\j\5\x\o\8\a\a\c\y\4\r\g\r\w\m\r\v\7\6\c\m\d\s\x\6\j\j\7\2\2\q\j\c\z\8\w\0\f\r\h\z\i\1\r\b\f\p\q\a\q\j\2\m\x\k\a\b\c\m\u\w\h\d\t\p\u\7\1\j\j\j\4\8\7\o\1\h\b\7\k\5\r\1\2\d\w\z\y\y\8\z\x\0\s\e\w\x\8\m\6\z\2\3\0\c\h\c\l\n\j\a\l\q\5\n\h\r\h\p\e\3\b\2\m\k\j\2\8\d\m\e\8\d\e\0\l\b\6\9\4\h\4\z\a\a\i\w\z\c\l\5\w\i\6\9\8\c\5\j\1\x\g\3\n\r\5\3\g\1\3\g\g\v\v\l\l\e\a\7\9\i\m\f\t\n\8\4\4\p\2\9\z\t\5\0\3\n\c\2\r\w\k\2\e\0\z\l\6\l\e\t\0\8\x\8\1\p\g\0\o\7\9\a\c\i\d\6\q\h\r\a\u\t\6\c\k\p\v\k\9\7\5\8\v\u\u\d\n\y\5\a\9\5\b\5\2\9\h\1\w\q\8\n\j\p\h\v\h\i\d\p\r\7\h\t\f\0\f\v\m\4\a\2\d\h\k\a\9\9\j\n\x\k\q\l\4\f\v\o\r\4\6\o\4\o\4\7\7\6\u\u\q\7\v\i\3\e\i\f\5\9\q\1\h\7\e\8\x\l\0\w\i\a\x\1\y\v\o\i\n\j\y\k\n\g\l\7\q\e\p\8\y\9\i\8\h\u\v\o\9\g\9\t\u\c\m\7\m\p\h\8\i\t\m\1\2\l\n\f\w\5\0\d\0\m\b\v\d\z\k\o\z\a\f\n\3\1\n\2\q\5\a\5\q\k\6\3\z\4\a\t\y\b\2\b\k\f\m\h\n\j\f\k\3\6\6\r\b\q\y\t\w\m\d\l\0\l\u\v\w\q\y\0\v\y\n\r\7\6\d\4\y\6\m\o\0\7\2\l\r\s\0\c\0\o\1\1\9\1\5\7\b\s\7\r\x\p\z\x\m\u\r\a\g\5\g\w\e\g\5\2\
9\o\k\2\0\a\0\l\m\d\j\l\l\0\y\w\v\c\g\c\e\2\x\e\w\x\l\6\c\n\m\7\q\n\n\m\9\c\w\7\e\t\l\1\x\i\i\0\4\4\i\c\u\c\x\n\d\h\z\r\r\o\q\h\n\f\z\b\f\9\q\v\q\c\t\y\1\2\o\y\s\2\i\z\u\l\7\i\8\a\l\6\q\w\n\g\z\e\1\5\c\z\n\b\g\b\2\u\z\r\a\v\f\f\l\r\t\z\6\4\l\a\u\d\z\m\d\j\0\2\b\e\k\p\8\6\j\u\k\1\g\v\p\h\0\6\u\n\m\2\4\g\u\d\i\z\0\8\o\f\q\a\n\i\j\4\g\0\9\4\u\i\x\q\z\z\9\3\s\f\b\y\o\q\q\p\c\i\w\5\a\0\g\y\n\w\6\t\p\k\4\z\s\7\3\a\y\e\r\0\o\d\q\y\8\0\y\2\r\q\7\a\6\d\g\e\i\o\z\t\l\m\a\y\7\t\l\7\8\7\2\1\8\0\9\d\v\j\k\1\1\4\8\j\q\v\x\z\w\6\b\4\p\1\2\3\4\r\d\5\g\4\r\s\m\k\a\7\z\w\x\n\8\s\3\3\m\k\h\2\v\n\6\a\1\j\d\t\y\5\t\v\r\9\5\p\c\v\6\5\w\e\c\7\q\1\w\c\j\j\b\s\5\z\v\p\a\s\1\g\h\y\6\z\f\k\w\i\i\c\5\g\o\p\3\8\k\m\n\r\2\2\4\e\0\u\m\l\g\g\3\3\q\a\l\e\n\w\p\4\6\4\3\e\p\5\b\p\x\t\t\x\t\0\l\f\4\3\5\d\z\g\c\u\a\j\v\l\x\c\l\v\s\u\u\0\7\s\d\p\5\h\f\1\8\l\k\s\1\m\8\0\h\3\n\s\d\j\1\1\5\4\i\b\9\i\u\f\4\h\3\g\r\p\x\j\v\l\r\l\m\k\3\c\v\s\5\v\2\i\b\8\n\3\j\y\p\9\n\t\b\r\g\z\2\z\m\v\1\b\b\9\2\i\u\e\p\d\u\p\u\i\a\h\0\q\q\a\y\y\a\b\s\f\h\z\l\h\9\8\4\a\2\c\0\b\8\c\7\z\b\i\5\q\0\v\r\w\3\n\h\o\k\f\l\u\2\e\g\p\f\w\n\x\e\8\u\u\o\6\1\3\n\4\b\w\e\3\7\8\5\w\u\9\p\u\c\z\4\q\9\q\r\x\w\y\4\f\q\p\1\7\t\4\n\0\2\s\k\6\y\d\g\8\w\n\9\n\2\k\g\9\0\g\q\h\u\i\r\r\m\k\e\p\c\0\v\j\p\t\p\7\p\5\0\0\o\g\m\g\v\h\2\6\b\a\e\3\3\a\4\m\5\v\7\3\c\y\j\h\f\y\a\5\l\y\s\t\a\n\i\2\a\i\a\g\3\z\a\9\2\0\8\y\h\r\z\z\4\t\j\q\k\7\0\8\4\e\g\3\p\4\x\9\h\a\y\0\w\w\9\2\m\w\1\9\x\x\e\0\a\9\d\4\u\t\7\l\r\h\9\r\g\8\w\p\j\0\6\m\3\y\z\y\d\f\n\m\3\u\y\5\h\c\y\e\n\i\c\8\f\a\7\5\2\0\m\r\l\i\x\b\h\c\r\7\8\v\l\x\w\3\h\3\e\8\9\1\u\w\u\y\9\6\e\6\r\v\g\n\8\p\v\z\o\v\x\w\y\d\4\g\p\1\y\5\3\u\g\p\x\7\m\v\h\c\9\7\y\3\b\q\f\o\p\l\p\u\y\m\z\v\a\u\2\c\7\2\r\7\m\8\s\7\k\e\y\m\t\8\6\q\n\n\t\x\7\v\f\y\u\4\m\q\3\6\2\l\o\p\0\c\n\g\j\7\f\z\6\w\6\i\r\f\z\7\h\k\5\9\i\u\z\t\l\i\p\j\2\w\9\r\7\t\e\v\v\l\f\t\5\h\o\s\x\c\q\k\d\e\l\2\z\q\u\z\k\m\3\r\1\7\c\7\i\9\z\x\m\4\l\7\v\1\w\8\i\s\j\j\o\1\y\e\h\6\d\r\3\f\v\p\v\w\4\x\2\r\v\e\1\p\z\q\x\9\h\d\3\5\m\j\o\d\o\3\3\0\e\f\y\k\a\i\3\6\o\8\m\d\h\o\h\u\m\n\k\s\h\f\4\1\4\5\z\4\u\k\z\d\l\0\3\e\d\5\p\a\n\2\a\w\e\f\h\6\y\o\3\t\w\r\f\2\w\f\1\1\l\k\9\d\p\o\x\p\a\q\4\q\k\u\p\7\h\s\k\i\b\4\3\t\c\7\e\g\u\2\h\e\h\3\a\h\k\1\m\i\v\2\n\j\p\u\c\p\v\v\2\i\g\i\p\q\6\s\c\6\a\c\y\v\u\c\j\x\n\f\g\n\p\x\z\v\z\x\8\s\n\2\h\y\k\l\0\b\7\h\y\w\3\5\9\k\q\q\h\5\a\9\z\3\h\x\s\m\w\b\7\a\5\6\s\t\s\d\m\2\4\q\h\f\l\5\2\6\k\v\5\3\o\h\z\b\b\u\t\l\2\9\2\5\q\p\l\g\8\c\d\a\7\t\z\6\r\4\g\b\v\o\5\t\q\0\z\r\k\l\s\t\8\c\2\s\i\6\z\9\w\7\k\e\4\f\i\0\0\0\p\i\h\b\x\w\r\3\7\1\k\o\6\c\w\e\h\i\v\e\o\k\9\z\d\1\e\x\v\b\p\t\o\3\v\s\d\1\s\o\1\e\j\s\q\b\0\l\u\j\x\j\v\7\k\r\n\q\m\m\t\q\8\t\z\9\z\g\l\n\r\w\e\v\2\d\r\q\z\n\v\y\d\a\e\j\n\4\1\w\b\a\6\q\v\4\7\t\v\r\u\h\v\2\5\u\u\4\g\a\e\6\t\p\r\x\l\o\y\e\m\p\q\c\u\l\9\b\4\9\8\h\9\m\r\e\w\q\8\e\k\6\5\g\f\c\0\a\6\m\9\a\f\w\9\7\p\8\x\e\f\3\h\0\e\c\5\t\4\u\g\z\w\h\o\f\6\w\s\l\a\f\n\4\f\t\1\k\v\s\o\r\n\8\9\7\4\k\7\p\5\m\7\z\3\y\c\t\j\8\3\2\b\v\e\j\t\i\4\j\s\w\g\e\o\3\d\j\t\b\x\p\m\v\0\i\6\7\8\p\5\f\1\b\4\x\i\m\u\y\d\a\p\y\c\w\0\0\8\z\i\7\t\g\i\0\y\m\v\9\u\0\s\v\z\q\d\y\r\s\a\d\z\6\a\w\2\9\f\i\x\3\g\g\r\r\r\r\d\x\4\4\s\i\l\0\z\d\i\v\t\i\h\p\5\v\h\0\u\u\b\o\9\n\5\0\f\3\z\6\u\b\r\s\6\6\e\6\r\5\l\4\9\k\f\2\z\g\o\u\b\u\k\e\e\l\g\5\3\e\d\q\c\7\k\x\a\2\i\r\3\n\b\r\m\c\x\2\j\2\7\q\8\s\t\i\t\7\s\u\r\d\7\q\b\e\c\k\o\6\z\i\w\h\6\0\c\n\j\d\5\x\x\f\z\6\c\x\y\m\1\t\a\d\c\m\8\b\h\0\j\d\p\5\z\y\z\6\l\1\9\q\x\c\e\3\l\h\2\9\8\q\x\w\f\5\5\4\4\o\f\l\4\k\z\n\q\y\8\1\f\m\k\t\z\x\4\d\w\k\y\7\l\d\y\z\p\8\n\4\e\e\b\0\1\w\m\y\j\k\5\s\c\6\y\r\b\p\p\9\w\p\j\s\n\z\t\8\7\6\u\7\s\o\l\w\k\f\3\4\1\3\g\e\h\r\o\o\9\g
\6\t\9\c\t\6\1\o\r\z\x\e\c\x\4\9\4\h\z\e\q\x\y\a\m\o\g\b\u\4\s\0\z\i\s\n\p\g\v\r\8\7\k\c\y\k\p\3\r\3\b\6\9\c\6\v\8\x\o\o\l\4\j\7\x\r\t\k\n\3\4\7\r\m\4\m\l\2\g\b\5\l\n\g\8\x\w\m\k\6\y\g\g\2\f\s\p\s\t\2\0\m\7\s\z\r\r\a\5\f\y\t\z\1\d\k\y\y\5\4\f\4\c\0\w\9\8\s\x\j\v\h\6\3\l\g\v\t\3\f\f\t\k\i\d\7\z\9\o\m\l\5\w\l\0\b\q\3\j\t\0\7\5\j\x\z\d\s\r\f\b\z\l\h\d\k\v\d\x\t\4\0\d\5\t\s\u\k\o\u\g\v\i\g\r\0\1\l\q\m\0\r\t\u\0\9\6\w\n\2\t\n\x\r\8\y\8\e\b\u\2\j\q\q\l\z\y\7\c\v\4\4\w\x\5\h\9\l\e\q\e\6\d\l\n\f\m\w\j\c\j\w\k\v\2\c\l\h\0\l\v\i\k\l\t\8\u\g\w\h\k\j\q\x\w\w\y\e\f\j\1\v\f\1\6\v\f\t\i\a\y\1\m\k\j\t\v\v\w\h\t\n\p\m\j\1\h\1\u\a\e\l\v\m\d\1\8\w\a\h\9\n\1\2\t\z\g\d\g\u\k\r\1\w\5\h\n\5\y\l\4\k\5\6\a\m\w\h\p\3\e\h\6\r\g\c\s\d\t\c\u\q\m\x\1\t\b\c\t\x\b\n\j\o\z\z\l\l\d\e\v\3\f\c\3\c\o\a\8\7\w\g\7\r\b\w\3\t\t\9\n\p\w\4\w\z\k\4\z\t\f\e\5\u\x\j\k\s\r\1\9\1\4\d\s\w\3\8\i\k\h\3\h\h\c\b\p\w\v\9\o\t\4\6\p\8\c\v\c\d\1\d\z\w\e\q\l\d\v\c\e\y\b\m\a\z\t\z\t\m\3\p\r\l\l\g\j\0\f\a\h\e\i\x\e\z\a\w\q\9\4\s\y\9\l\p\r\v\l\p\y\7\4\4\i\5\m\9\t\a\7\r\w\8\i\i\q\e\7\c\u\q\7\m\0\o\q\e\w\3\6\7\d\j\5\j\b\6\p\x\s\2\m\y\h\p\c\t\6\y\b\y\q\r\f\h\c\s\f\r\j\4\j\v\j\0\d\f\v\8\e\v\l\0\v\z\3\i\p\k\b\h\0\8\v\b\f\9\b\o\g\u\2\i\5\w\w\k\z\j\e\o\p\r\s\d\e\r\s\1\v\o\d\g\s\k\6\g\4\j\x\m\q\k\w\w\0\o\h\t\0\3\7\c\8\h\y\n\p\j\e\g\i\m\5\5\9\l\j\t\5\0\d\s\n\g\7\k\v\h\o\x\2\m\t\e\g\o\3\p\0\v\p\c\k\x\n\i\c\q\d\g\m\o\3\x\e\r\z\s\7\3\d\3\2\2\y\d\t\r\6\y\9\f\i\9\z\i\t\n\g\y\e\q\9\q\5\7\e\e\j\q\b\j\w\0\8\z\8\2\r\8\t\d\m\3\q\u\u\z\4\4\t\p\d\o\l\g\2\w\k\y\f\o\y\o\x\8\3\6\z\3\p\4\b\v\d\5\v\g\b\a\2\6\m\z\r\e\g\y\u\t\r\4\m\h\8\e\d\7\s\p\s\0\f\n\f\b\w\n\f\c\9\n\u\4\v\o\w\i\v\e\5\6\p\b\q\t\k\g\6\1\r\4\d\4\b\f\h\j\b\1\9\w\x\d\u\4\4\l\u\d\z\j\y\d\v\5\x\a\l\e\0\j\h\r\8\q\n\2\3\2\y\1\2\k\d\w\d\7\o\w\f\t\t\2\p\q\a\4\o\p\u\w\7\y\w\s\1\f\9\t\u\8\f\1\5\f\b\t\t\y\p\5\n\d\m\d\8\n\j\f\w\2\7\k\r\u\e\t\h\u\7\t\f\l\y\z\y\y\r\p\p\5\i\l\d\d\q\l\5\y\o\1\8\5\y\h\u\n\9\g\d\b\9\0\r\g\t\l\m\g\i\0\v\i\1\v\o\9\k\a\p\h\r\q\e\5\a\z\5\8\7\4\0\n\r\m\h\6\0\6\h\5\1\l\l\4\y\d\l\j\e\n\d\w\k\6\5\w\w\m\n\6\c\m\d\5\l\y\2\q\n\h\d\t\2\3\3\y\3\c\1\f\n\y\0\r\s\k\y\i\x\w\c\j\s\u\z\9\c\g\x\s\q\4\b\j\n\n\y\s\6\c\i\c\j\q\l\4\r\h\k\4\6\o\n\m\t\2\a\9\a\3\w\f\n\8\0\m\q\v\e\1\b\b\w\t\d\g\0\6\7\z\c\0\v\2\a\r\p\u\3\p\b\r\1\9\7\v\1\n\2\k\n\z\8\h\u\l\4\m\1\h\w\3\y\y\x\3\v\x\9\i\9\s\z\k\1\i\x\a\6\t\g\6\s\v\6\k\j\e\h\d\f\7\1\n\d\p\9\i\1\r\s\p\y\0\d\p\a\z\x\r\u\r\g\w\1\w\z\8\a\3\q\4\4\y\d\t\b\1\2\a\z\y\z\x\l\f\m\c\z\z\9\v\k\g\m\0\p\s\z\0\t\b\a\n\1\9\0\4\q\0\4\r\m\1\a\g\l\a\q\d\y\0\c\w\d\z\q\x\q\v\r\h\a\g\8\a\c\y\i\t\o\1\t\q\x\i\n\0\w\3\y\1\3\4\l\7\1\r\g\k\9\o\d ]] 00:07:13.622 00:07:13.622 real 0m1.166s 00:07:13.622 user 0m0.766s 00:07:13.622 sys 0m0.519s 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.622 ************************************ 00:07:13.622 END TEST dd_rw_offset 00:07:13.622 ************************************ 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:13.622 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:13.623 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
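
The wall of backslash-escaped characters in the comparison above is ordinary bash xtrace output, not corruption: inside [[ ... == ... ]] an unquoted right-hand side is a glob pattern, so the test quotes it to force a literal match, and xtrace renders a quoted right-hand operand with every character backslash-escaped to show it will not be treated as a pattern. A tiny illustration with a hypothetical string:

  set -x
  data=abc123
  [[ $data == "$data" ]]   # xtrace prints this as: [[ abc123 == \a\b\c\1\2\3 ]]
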
00:07:13.623 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:13.623 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:13.623 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:13.623 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.623 07:34:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.623 { 00:07:13.623 "subsystems": [ 00:07:13.623 { 00:07:13.623 "subsystem": "bdev", 00:07:13.623 "config": [ 00:07:13.623 { 00:07:13.623 "params": { 00:07:13.623 "trtype": "pcie", 00:07:13.623 "traddr": "0000:00:10.0", 00:07:13.623 "name": "Nvme0" 00:07:13.623 }, 00:07:13.623 "method": "bdev_nvme_attach_controller" 00:07:13.623 }, 00:07:13.623 { 00:07:13.623 "method": "bdev_wait_for_examine" 00:07:13.623 } 00:07:13.623 ] 00:07:13.623 } 00:07:13.623 ] 00:07:13.623 } 00:07:13.623 [2024-11-08 07:34:31.436249] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:13.623 [2024-11-08 07:34:31.436342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59982 ] 00:07:13.881 [2024-11-08 07:34:31.585754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.881 [2024-11-08 07:34:31.638230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.881 [2024-11-08 07:34:31.680062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.881  [2024-11-08T07:34:32.101Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:14.140 00:07:14.140 07:34:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.140 ************************************ 00:07:14.140 END TEST spdk_dd_basic_rw 00:07:14.140 ************************************ 00:07:14.140 00:07:14.140 real 0m15.911s 00:07:14.140 user 0m10.824s 00:07:14.140 sys 0m6.100s 00:07:14.140 07:34:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:14.140 07:34:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.140 07:34:31 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:14.140 07:34:31 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:14.140 07:34:31 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.140 07:34:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:14.140 ************************************ 00:07:14.140 START TEST spdk_dd_posix 00:07:14.140 ************************************ 00:07:14.140 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:14.140 * Looking for test storage... 
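
The clear_nvme cleanup traced just above zeroes the start of the bdev (--if=/dev/zero) and removes the dump files before the posix suite begins. A sketch under the same assumptions as the earlier offset sketch (paths and flags from the log; the count=1 in the trace is consistent with rounding the 0xffff-byte size up to one whole 1 MiB block, which is the calculation shown here):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
  bdev_conf() { printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'; }
  size=$((0xffff)); bs=1048576
  count=$(( (size + bs - 1) / bs ))   # 65535 bytes rounds up to a single 1 MiB block
  "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" --json /dev/fd/62 62< <(bdev_conf)
  rm -f "$DD_DIR/dd.dump0" "$DD_DIR/dd.dump1"
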
00:07:14.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:14.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.400 --rc genhtml_branch_coverage=1 00:07:14.400 --rc genhtml_function_coverage=1 00:07:14.400 --rc genhtml_legend=1 00:07:14.400 --rc geninfo_all_blocks=1 00:07:14.400 --rc geninfo_unexecuted_blocks=1 00:07:14.400 00:07:14.400 ' 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:14.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.400 --rc genhtml_branch_coverage=1 00:07:14.400 --rc genhtml_function_coverage=1 00:07:14.400 --rc genhtml_legend=1 00:07:14.400 --rc geninfo_all_blocks=1 00:07:14.400 --rc geninfo_unexecuted_blocks=1 00:07:14.400 00:07:14.400 ' 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:14.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.400 --rc genhtml_branch_coverage=1 00:07:14.400 --rc genhtml_function_coverage=1 00:07:14.400 --rc genhtml_legend=1 00:07:14.400 --rc geninfo_all_blocks=1 00:07:14.400 --rc geninfo_unexecuted_blocks=1 00:07:14.400 00:07:14.400 ' 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:14.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.400 --rc genhtml_branch_coverage=1 00:07:14.400 --rc genhtml_function_coverage=1 00:07:14.400 --rc genhtml_legend=1 00:07:14.400 --rc geninfo_all_blocks=1 00:07:14.400 --rc geninfo_unexecuted_blocks=1 00:07:14.400 00:07:14.400 ' 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.400 07:34:32 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:14.401 * First test run, liburing in use 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:14.401 ************************************ 00:07:14.401 START TEST dd_flag_append 00:07:14.401 ************************************ 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=3l0jtez9krrcfx1cjuqksmjwh1vmsbvy 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=t40nux67dcg57af86wuts725v34aymga 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 3l0jtez9krrcfx1cjuqksmjwh1vmsbvy 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s t40nux67dcg57af86wuts725v34aymga 00:07:14.401 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:14.401 [2024-11-08 07:34:32.344551] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
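
The dd_flag_append run above writes one 32-byte random string to each dump file, copies dd.dump0 onto dd.dump1 with --oflag=append, and then expects dd.dump1 to hold its original contents followed by dd.dump0's. A sketch of the same check, with placeholder strings standing in for the gen_bytes output:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
  a=$(head -c 24 /dev/urandom | base64)   # 32-character stand-ins for the gen_bytes 32 strings
  b=$(head -c 24 /dev/urandom | base64)
  printf %s "$a" > "$DD_DIR/dd.dump0"
  printf %s "$b" > "$DD_DIR/dd.dump1"
  "$SPDK_DD" --if="$DD_DIR/dd.dump0" --of="$DD_DIR/dd.dump1" --oflag=append
  [[ $(cat "$DD_DIR/dd.dump1") == "$b$a" ]] && echo "dd.dump1 now ends with dd.dump0's contents"
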
00:07:14.401 [2024-11-08 07:34:32.344637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60049 ] 00:07:14.660 [2024-11-08 07:34:32.494063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.660 [2024-11-08 07:34:32.540419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.660 [2024-11-08 07:34:32.581799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.660  [2024-11-08T07:34:32.879Z] Copying: 32/32 [B] (average 31 kBps) 00:07:14.918 00:07:14.918 ************************************ 00:07:14.918 END TEST dd_flag_append 00:07:14.918 ************************************ 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ t40nux67dcg57af86wuts725v34aymga3l0jtez9krrcfx1cjuqksmjwh1vmsbvy == \t\4\0\n\u\x\6\7\d\c\g\5\7\a\f\8\6\w\u\t\s\7\2\5\v\3\4\a\y\m\g\a\3\l\0\j\t\e\z\9\k\r\r\c\f\x\1\c\j\u\q\k\s\m\j\w\h\1\v\m\s\b\v\y ]] 00:07:14.918 00:07:14.918 real 0m0.488s 00:07:14.918 user 0m0.250s 00:07:14.918 sys 0m0.231s 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:14.918 ************************************ 00:07:14.918 START TEST dd_flag_directory 00:07:14.918 ************************************ 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.918 07:34:32 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.178 [2024-11-08 07:34:32.880027] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:15.178 [2024-11-08 07:34:32.880095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60083 ] 00:07:15.178 [2024-11-08 07:34:33.019286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.178 [2024-11-08 07:34:33.065774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.178 [2024-11-08 07:34:33.107468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.178 [2024-11-08 07:34:33.135704] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:15.178 [2024-11-08 07:34:33.135751] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:15.178 [2024-11-08 07:34:33.135767] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.436 [2024-11-08 07:34:33.229302] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.437 07:34:33 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.437 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:15.437 [2024-11-08 07:34:33.352286] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:15.437 [2024-11-08 07:34:33.352534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60087 ] 00:07:15.694 [2024-11-08 07:34:33.501301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.694 [2024-11-08 07:34:33.548046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.694 [2024-11-08 07:34:33.589614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.694 [2024-11-08 07:34:33.617643] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:15.694 [2024-11-08 07:34:33.617689] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:15.694 [2024-11-08 07:34:33.617705] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.957 [2024-11-08 07:34:33.710797] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.957 00:07:15.957 real 0m0.934s 00:07:15.957 user 0m0.494s 00:07:15.957 sys 0m0.232s 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.957 ************************************ 00:07:15.957 END TEST dd_flag_directory 00:07:15.957 ************************************ 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:15.957 07:34:33 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:15.957 ************************************ 00:07:15.957 START TEST dd_flag_nofollow 00:07:15.957 ************************************ 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:15.957 07:34:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.957 [2024-11-08 07:34:33.908100] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
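
The dd_flag_directory runs traced above (before the nofollow test starts here) confirm that spdk_dd rejects a regular file when the directory flag is supplied, on both the read and the write side; the expected failure is the "Not a directory" open error. A condensed sketch of that negative check:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
  # both directions are expected to fail with "Not a directory"
  ! "$SPDK_DD" --if="$DD_DIR/dd.dump0" --iflag=directory --of="$DD_DIR/dd.dump0"
  ! "$SPDK_DD" --if="$DD_DIR/dd.dump0" --of="$DD_DIR/dd.dump0" --oflag=directory
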
00:07:15.957 [2024-11-08 07:34:33.908197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60121 ] 00:07:16.217 [2024-11-08 07:34:34.058964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.217 [2024-11-08 07:34:34.104909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.217 [2024-11-08 07:34:34.146423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.217 [2024-11-08 07:34:34.174354] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:16.217 [2024-11-08 07:34:34.174400] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:16.217 [2024-11-08 07:34:34.174417] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.476 [2024-11-08 07:34:34.267963] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.476 07:34:34 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.476 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:16.476 [2024-11-08 07:34:34.391844] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:16.476 [2024-11-08 07:34:34.391939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60125 ] 00:07:16.735 [2024-11-08 07:34:34.542743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.735 [2024-11-08 07:34:34.588337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.735 [2024-11-08 07:34:34.630418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.735 [2024-11-08 07:34:34.660599] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:16.735 [2024-11-08 07:34:34.660658] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:16.735 [2024-11-08 07:34:34.660685] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.994 [2024-11-08 07:34:34.755488] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:16.994 07:34:34 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.994 [2024-11-08 07:34:34.881336] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
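
The nofollow runs above create symlinks to both dump files and expect spdk_dd to refuse them whenever --iflag=nofollow or --oflag=nofollow is set (the "Too many levels of symbolic links" errors), finishing with a plain copy through the same link, which is expected to succeed. A sketch of the three steps, reusing the paths from the log:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
  ln -fs "$DD_DIR/dd.dump0" "$DD_DIR/dd.dump0.link"
  ln -fs "$DD_DIR/dd.dump1" "$DD_DIR/dd.dump1.link"
  ! "$SPDK_DD" --if="$DD_DIR/dd.dump0.link" --iflag=nofollow --of="$DD_DIR/dd.dump1"    # must fail
  ! "$SPDK_DD" --if="$DD_DIR/dd.dump0" --of="$DD_DIR/dd.dump1.link" --oflag=nofollow    # must fail
  "$SPDK_DD" --if="$DD_DIR/dd.dump0.link" --of="$DD_DIR/dd.dump1"                       # plain copy through the link
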
00:07:16.994 [2024-11-08 07:34:34.881626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60138 ] 00:07:17.253 [2024-11-08 07:34:35.033841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.253 [2024-11-08 07:34:35.082808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.253 [2024-11-08 07:34:35.124232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.253  [2024-11-08T07:34:35.473Z] Copying: 512/512 [B] (average 500 kBps) 00:07:17.512 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 67e9nrsjgtdjqej37alu9safm4qghjxz565ocqvjz2wsod0hpe9ppp6w4hw7ig9z55mb76v5gi2zxkut0bh1yjzsyg8wz72u0jsd792zpnvpc2xgwdc451eterwthvta4i3wb61gt1wvb8fw9qrq0q60l90724tupos59gv427939lfakpmczwjp43w20u6jh923ybrjcfp2a3o08b0o3fj74ltbo9wwomfklf55gj7xvwaxpwmx4emc2rvvuvkr6yg66lx285ap4dtt6fvh04h74gmlcgpi12xtke2r2cfk241j60uk9tzdu84z5qrazs23x8mhwsoxbezoht4kgp15qwfud1x3dc61jgc00cgxqskemnfx0oticeel51vz4u3kvw0mm89g7qhhoa4vx0l6djqjqiehfrh9u40hfa5km7wi59ifladihmicd8bttmiiwlesqdo5vhe72khrxa9f6sy90s4r57mycwvsfbec9x0sbg1deqznvm5iu1ln == \6\7\e\9\n\r\s\j\g\t\d\j\q\e\j\3\7\a\l\u\9\s\a\f\m\4\q\g\h\j\x\z\5\6\5\o\c\q\v\j\z\2\w\s\o\d\0\h\p\e\9\p\p\p\6\w\4\h\w\7\i\g\9\z\5\5\m\b\7\6\v\5\g\i\2\z\x\k\u\t\0\b\h\1\y\j\z\s\y\g\8\w\z\7\2\u\0\j\s\d\7\9\2\z\p\n\v\p\c\2\x\g\w\d\c\4\5\1\e\t\e\r\w\t\h\v\t\a\4\i\3\w\b\6\1\g\t\1\w\v\b\8\f\w\9\q\r\q\0\q\6\0\l\9\0\7\2\4\t\u\p\o\s\5\9\g\v\4\2\7\9\3\9\l\f\a\k\p\m\c\z\w\j\p\4\3\w\2\0\u\6\j\h\9\2\3\y\b\r\j\c\f\p\2\a\3\o\0\8\b\0\o\3\f\j\7\4\l\t\b\o\9\w\w\o\m\f\k\l\f\5\5\g\j\7\x\v\w\a\x\p\w\m\x\4\e\m\c\2\r\v\v\u\v\k\r\6\y\g\6\6\l\x\2\8\5\a\p\4\d\t\t\6\f\v\h\0\4\h\7\4\g\m\l\c\g\p\i\1\2\x\t\k\e\2\r\2\c\f\k\2\4\1\j\6\0\u\k\9\t\z\d\u\8\4\z\5\q\r\a\z\s\2\3\x\8\m\h\w\s\o\x\b\e\z\o\h\t\4\k\g\p\1\5\q\w\f\u\d\1\x\3\d\c\6\1\j\g\c\0\0\c\g\x\q\s\k\e\m\n\f\x\0\o\t\i\c\e\e\l\5\1\v\z\4\u\3\k\v\w\0\m\m\8\9\g\7\q\h\h\o\a\4\v\x\0\l\6\d\j\q\j\q\i\e\h\f\r\h\9\u\4\0\h\f\a\5\k\m\7\w\i\5\9\i\f\l\a\d\i\h\m\i\c\d\8\b\t\t\m\i\i\w\l\e\s\q\d\o\5\v\h\e\7\2\k\h\r\x\a\9\f\6\s\y\9\0\s\4\r\5\7\m\y\c\w\v\s\f\b\e\c\9\x\0\s\b\g\1\d\e\q\z\n\v\m\5\i\u\1\l\n ]] 00:07:17.512 00:07:17.512 real 0m1.469s 00:07:17.512 user 0m0.763s 00:07:17.512 sys 0m0.495s 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.512 ************************************ 00:07:17.512 END TEST dd_flag_nofollow 00:07:17.512 ************************************ 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 ************************************ 00:07:17.512 START TEST dd_flag_noatime 00:07:17.512 ************************************ 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731051275 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731051275 00:07:17.512 07:34:35 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:18.448 07:34:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.708 [2024-11-08 07:34:36.438847] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:18.708 [2024-11-08 07:34:36.438945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60175 ] 00:07:18.708 [2024-11-08 07:34:36.594428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.708 [2024-11-08 07:34:36.657948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.968 [2024-11-08 07:34:36.705492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.968  [2024-11-08T07:34:36.929Z] Copying: 512/512 [B] (average 500 kBps) 00:07:18.968 00:07:18.968 07:34:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.968 07:34:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731051275 )) 00:07:18.968 07:34:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.968 07:34:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731051275 )) 00:07:18.968 07:34:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.228 [2024-11-08 07:34:36.958386] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
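
The dd_flag_noatime run above records each dump file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and asserts that the source atime is unchanged; a second copy without the flag then follows. A sketch of the same sequence (whether the final copy actually advances the atime also depends on mount options such as relatime, which the log does not show):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
  atime_before=$(stat --printf=%X "$DD_DIR/dd.dump0")
  sleep 1
  "$SPDK_DD" --if="$DD_DIR/dd.dump0" --iflag=noatime --of="$DD_DIR/dd.dump1"
  atime_after=$(stat --printf=%X "$DD_DIR/dd.dump0")
  (( atime_before == atime_after )) && echo "noatime left the source access time untouched"
  "$SPDK_DD" --if="$DD_DIR/dd.dump0" --of="$DD_DIR/dd.dump1"   # without noatime the atime may advance
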
00:07:19.228 [2024-11-08 07:34:36.958717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60194 ] 00:07:19.228 [2024-11-08 07:34:37.107476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.228 [2024-11-08 07:34:37.156308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.487 [2024-11-08 07:34:37.197808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.487  [2024-11-08T07:34:37.448Z] Copying: 512/512 [B] (average 500 kBps) 00:07:19.487 00:07:19.487 07:34:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.487 ************************************ 00:07:19.487 END TEST dd_flag_noatime 00:07:19.487 ************************************ 00:07:19.487 07:34:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731051277 )) 00:07:19.487 00:07:19.487 real 0m2.020s 00:07:19.487 user 0m0.522s 00:07:19.487 sys 0m0.500s 00:07:19.487 07:34:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.487 07:34:37 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:19.487 07:34:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:19.487 07:34:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:19.487 07:34:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.487 07:34:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:19.819 ************************************ 00:07:19.819 START TEST dd_flags_misc 00:07:19.819 ************************************ 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:19.819 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:19.819 [2024-11-08 07:34:37.495804] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:19.819 [2024-11-08 07:34:37.495873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60217 ] 00:07:19.819 [2024-11-08 07:34:37.638709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.819 [2024-11-08 07:34:37.689966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.819 [2024-11-08 07:34:37.731566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.080  [2024-11-08T07:34:38.041Z] Copying: 512/512 [B] (average 500 kBps) 00:07:20.080 00:07:20.080 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8bhams98o0o15ntqdnd27w6ibuuvok6jgqvpgs3ll0kn68oamr4bnrgnsv0q14qqd5qu8kj3d2xi4zcdiwmzska4oewpvuu3b7j69bjrepk2bx19s0j6pwmddlcgdmfwakz79lhkh2zifx8wjt62c38gl9l7rl44n9wh9lkyf6lgh2rlr07cht93i7pg0evt4sli1h50rdpyb1adc0i3x253ydnoaztdza77qm7rul9oqjlgmtzcy7vrnl7n1euuztgncvu93shf7l8jn4fcmnmpkt4mxouon9t23l4om0jd8tpbb4dbxuiuoelm2beebaz1ik2g1oryr60gzjoynj41b0do9kd5itgfclhq9kfewg4dn0bracq4vsks8qrcvlgamltydxn5d0mt9aqkegrerm96b842g5vs0r12ai135n59f6zq7aj9qfd5eo55mpu8ngghbgld0alzi0a5fkj5bxhn282tkugafwx7rsn4qq1cptqhg1dkgvnyhq6c == \8\b\h\a\m\s\9\8\o\0\o\1\5\n\t\q\d\n\d\2\7\w\6\i\b\u\u\v\o\k\6\j\g\q\v\p\g\s\3\l\l\0\k\n\6\8\o\a\m\r\4\b\n\r\g\n\s\v\0\q\1\4\q\q\d\5\q\u\8\k\j\3\d\2\x\i\4\z\c\d\i\w\m\z\s\k\a\4\o\e\w\p\v\u\u\3\b\7\j\6\9\b\j\r\e\p\k\2\b\x\1\9\s\0\j\6\p\w\m\d\d\l\c\g\d\m\f\w\a\k\z\7\9\l\h\k\h\2\z\i\f\x\8\w\j\t\6\2\c\3\8\g\l\9\l\7\r\l\4\4\n\9\w\h\9\l\k\y\f\6\l\g\h\2\r\l\r\0\7\c\h\t\9\3\i\7\p\g\0\e\v\t\4\s\l\i\1\h\5\0\r\d\p\y\b\1\a\d\c\0\i\3\x\2\5\3\y\d\n\o\a\z\t\d\z\a\7\7\q\m\7\r\u\l\9\o\q\j\l\g\m\t\z\c\y\7\v\r\n\l\7\n\1\e\u\u\z\t\g\n\c\v\u\9\3\s\h\f\7\l\8\j\n\4\f\c\m\n\m\p\k\t\4\m\x\o\u\o\n\9\t\2\3\l\4\o\m\0\j\d\8\t\p\b\b\4\d\b\x\u\i\u\o\e\l\m\2\b\e\e\b\a\z\1\i\k\2\g\1\o\r\y\r\6\0\g\z\j\o\y\n\j\4\1\b\0\d\o\9\k\d\5\i\t\g\f\c\l\h\q\9\k\f\e\w\g\4\d\n\0\b\r\a\c\q\4\v\s\k\s\8\q\r\c\v\l\g\a\m\l\t\y\d\x\n\5\d\0\m\t\9\a\q\k\e\g\r\e\r\m\9\6\b\8\4\2\g\5\v\s\0\r\1\2\a\i\1\3\5\n\5\9\f\6\z\q\7\a\j\9\q\f\d\5\e\o\5\5\m\p\u\8\n\g\g\h\b\g\l\d\0\a\l\z\i\0\a\5\f\k\j\5\b\x\h\n\2\8\2\t\k\u\g\a\f\w\x\7\r\s\n\4\q\q\1\c\p\t\q\h\g\1\d\k\g\v\n\y\h\q\6\c ]] 00:07:20.080 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.080 07:34:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:20.080 [2024-11-08 07:34:37.968945] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:20.080 [2024-11-08 07:34:37.969068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60232 ] 00:07:20.339 [2024-11-08 07:34:38.112250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.339 [2024-11-08 07:34:38.159569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.339 [2024-11-08 07:34:38.201189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.339  [2024-11-08T07:34:38.559Z] Copying: 512/512 [B] (average 500 kBps) 00:07:20.598 00:07:20.598 07:34:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8bhams98o0o15ntqdnd27w6ibuuvok6jgqvpgs3ll0kn68oamr4bnrgnsv0q14qqd5qu8kj3d2xi4zcdiwmzska4oewpvuu3b7j69bjrepk2bx19s0j6pwmddlcgdmfwakz79lhkh2zifx8wjt62c38gl9l7rl44n9wh9lkyf6lgh2rlr07cht93i7pg0evt4sli1h50rdpyb1adc0i3x253ydnoaztdza77qm7rul9oqjlgmtzcy7vrnl7n1euuztgncvu93shf7l8jn4fcmnmpkt4mxouon9t23l4om0jd8tpbb4dbxuiuoelm2beebaz1ik2g1oryr60gzjoynj41b0do9kd5itgfclhq9kfewg4dn0bracq4vsks8qrcvlgamltydxn5d0mt9aqkegrerm96b842g5vs0r12ai135n59f6zq7aj9qfd5eo55mpu8ngghbgld0alzi0a5fkj5bxhn282tkugafwx7rsn4qq1cptqhg1dkgvnyhq6c == \8\b\h\a\m\s\9\8\o\0\o\1\5\n\t\q\d\n\d\2\7\w\6\i\b\u\u\v\o\k\6\j\g\q\v\p\g\s\3\l\l\0\k\n\6\8\o\a\m\r\4\b\n\r\g\n\s\v\0\q\1\4\q\q\d\5\q\u\8\k\j\3\d\2\x\i\4\z\c\d\i\w\m\z\s\k\a\4\o\e\w\p\v\u\u\3\b\7\j\6\9\b\j\r\e\p\k\2\b\x\1\9\s\0\j\6\p\w\m\d\d\l\c\g\d\m\f\w\a\k\z\7\9\l\h\k\h\2\z\i\f\x\8\w\j\t\6\2\c\3\8\g\l\9\l\7\r\l\4\4\n\9\w\h\9\l\k\y\f\6\l\g\h\2\r\l\r\0\7\c\h\t\9\3\i\7\p\g\0\e\v\t\4\s\l\i\1\h\5\0\r\d\p\y\b\1\a\d\c\0\i\3\x\2\5\3\y\d\n\o\a\z\t\d\z\a\7\7\q\m\7\r\u\l\9\o\q\j\l\g\m\t\z\c\y\7\v\r\n\l\7\n\1\e\u\u\z\t\g\n\c\v\u\9\3\s\h\f\7\l\8\j\n\4\f\c\m\n\m\p\k\t\4\m\x\o\u\o\n\9\t\2\3\l\4\o\m\0\j\d\8\t\p\b\b\4\d\b\x\u\i\u\o\e\l\m\2\b\e\e\b\a\z\1\i\k\2\g\1\o\r\y\r\6\0\g\z\j\o\y\n\j\4\1\b\0\d\o\9\k\d\5\i\t\g\f\c\l\h\q\9\k\f\e\w\g\4\d\n\0\b\r\a\c\q\4\v\s\k\s\8\q\r\c\v\l\g\a\m\l\t\y\d\x\n\5\d\0\m\t\9\a\q\k\e\g\r\e\r\m\9\6\b\8\4\2\g\5\v\s\0\r\1\2\a\i\1\3\5\n\5\9\f\6\z\q\7\a\j\9\q\f\d\5\e\o\5\5\m\p\u\8\n\g\g\h\b\g\l\d\0\a\l\z\i\0\a\5\f\k\j\5\b\x\h\n\2\8\2\t\k\u\g\a\f\w\x\7\r\s\n\4\q\q\1\c\p\t\q\h\g\1\d\k\g\v\n\y\h\q\6\c ]] 00:07:20.598 07:34:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:20.598 07:34:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:20.598 [2024-11-08 07:34:38.435487] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:20.598 [2024-11-08 07:34:38.435584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60236 ] 00:07:20.857 [2024-11-08 07:34:38.586822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.857 [2024-11-08 07:34:38.639787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.857 [2024-11-08 07:34:38.681237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.857  [2024-11-08T07:34:39.076Z] Copying: 512/512 [B] (average 500 kBps) 00:07:21.116 00:07:21.116 07:34:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8bhams98o0o15ntqdnd27w6ibuuvok6jgqvpgs3ll0kn68oamr4bnrgnsv0q14qqd5qu8kj3d2xi4zcdiwmzska4oewpvuu3b7j69bjrepk2bx19s0j6pwmddlcgdmfwakz79lhkh2zifx8wjt62c38gl9l7rl44n9wh9lkyf6lgh2rlr07cht93i7pg0evt4sli1h50rdpyb1adc0i3x253ydnoaztdza77qm7rul9oqjlgmtzcy7vrnl7n1euuztgncvu93shf7l8jn4fcmnmpkt4mxouon9t23l4om0jd8tpbb4dbxuiuoelm2beebaz1ik2g1oryr60gzjoynj41b0do9kd5itgfclhq9kfewg4dn0bracq4vsks8qrcvlgamltydxn5d0mt9aqkegrerm96b842g5vs0r12ai135n59f6zq7aj9qfd5eo55mpu8ngghbgld0alzi0a5fkj5bxhn282tkugafwx7rsn4qq1cptqhg1dkgvnyhq6c == \8\b\h\a\m\s\9\8\o\0\o\1\5\n\t\q\d\n\d\2\7\w\6\i\b\u\u\v\o\k\6\j\g\q\v\p\g\s\3\l\l\0\k\n\6\8\o\a\m\r\4\b\n\r\g\n\s\v\0\q\1\4\q\q\d\5\q\u\8\k\j\3\d\2\x\i\4\z\c\d\i\w\m\z\s\k\a\4\o\e\w\p\v\u\u\3\b\7\j\6\9\b\j\r\e\p\k\2\b\x\1\9\s\0\j\6\p\w\m\d\d\l\c\g\d\m\f\w\a\k\z\7\9\l\h\k\h\2\z\i\f\x\8\w\j\t\6\2\c\3\8\g\l\9\l\7\r\l\4\4\n\9\w\h\9\l\k\y\f\6\l\g\h\2\r\l\r\0\7\c\h\t\9\3\i\7\p\g\0\e\v\t\4\s\l\i\1\h\5\0\r\d\p\y\b\1\a\d\c\0\i\3\x\2\5\3\y\d\n\o\a\z\t\d\z\a\7\7\q\m\7\r\u\l\9\o\q\j\l\g\m\t\z\c\y\7\v\r\n\l\7\n\1\e\u\u\z\t\g\n\c\v\u\9\3\s\h\f\7\l\8\j\n\4\f\c\m\n\m\p\k\t\4\m\x\o\u\o\n\9\t\2\3\l\4\o\m\0\j\d\8\t\p\b\b\4\d\b\x\u\i\u\o\e\l\m\2\b\e\e\b\a\z\1\i\k\2\g\1\o\r\y\r\6\0\g\z\j\o\y\n\j\4\1\b\0\d\o\9\k\d\5\i\t\g\f\c\l\h\q\9\k\f\e\w\g\4\d\n\0\b\r\a\c\q\4\v\s\k\s\8\q\r\c\v\l\g\a\m\l\t\y\d\x\n\5\d\0\m\t\9\a\q\k\e\g\r\e\r\m\9\6\b\8\4\2\g\5\v\s\0\r\1\2\a\i\1\3\5\n\5\9\f\6\z\q\7\a\j\9\q\f\d\5\e\o\5\5\m\p\u\8\n\g\g\h\b\g\l\d\0\a\l\z\i\0\a\5\f\k\j\5\b\x\h\n\2\8\2\t\k\u\g\a\f\w\x\7\r\s\n\4\q\q\1\c\p\t\q\h\g\1\d\k\g\v\n\y\h\q\6\c ]] 00:07:21.116 07:34:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.116 07:34:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:21.116 [2024-11-08 07:34:38.897677] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:21.116 [2024-11-08 07:34:38.897749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60253 ] 00:07:21.116 [2024-11-08 07:34:39.037230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.373 [2024-11-08 07:34:39.083900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.373 [2024-11-08 07:34:39.125263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.373  [2024-11-08T07:34:39.334Z] Copying: 512/512 [B] (average 250 kBps) 00:07:21.373 00:07:21.373 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8bhams98o0o15ntqdnd27w6ibuuvok6jgqvpgs3ll0kn68oamr4bnrgnsv0q14qqd5qu8kj3d2xi4zcdiwmzska4oewpvuu3b7j69bjrepk2bx19s0j6pwmddlcgdmfwakz79lhkh2zifx8wjt62c38gl9l7rl44n9wh9lkyf6lgh2rlr07cht93i7pg0evt4sli1h50rdpyb1adc0i3x253ydnoaztdza77qm7rul9oqjlgmtzcy7vrnl7n1euuztgncvu93shf7l8jn4fcmnmpkt4mxouon9t23l4om0jd8tpbb4dbxuiuoelm2beebaz1ik2g1oryr60gzjoynj41b0do9kd5itgfclhq9kfewg4dn0bracq4vsks8qrcvlgamltydxn5d0mt9aqkegrerm96b842g5vs0r12ai135n59f6zq7aj9qfd5eo55mpu8ngghbgld0alzi0a5fkj5bxhn282tkugafwx7rsn4qq1cptqhg1dkgvnyhq6c == \8\b\h\a\m\s\9\8\o\0\o\1\5\n\t\q\d\n\d\2\7\w\6\i\b\u\u\v\o\k\6\j\g\q\v\p\g\s\3\l\l\0\k\n\6\8\o\a\m\r\4\b\n\r\g\n\s\v\0\q\1\4\q\q\d\5\q\u\8\k\j\3\d\2\x\i\4\z\c\d\i\w\m\z\s\k\a\4\o\e\w\p\v\u\u\3\b\7\j\6\9\b\j\r\e\p\k\2\b\x\1\9\s\0\j\6\p\w\m\d\d\l\c\g\d\m\f\w\a\k\z\7\9\l\h\k\h\2\z\i\f\x\8\w\j\t\6\2\c\3\8\g\l\9\l\7\r\l\4\4\n\9\w\h\9\l\k\y\f\6\l\g\h\2\r\l\r\0\7\c\h\t\9\3\i\7\p\g\0\e\v\t\4\s\l\i\1\h\5\0\r\d\p\y\b\1\a\d\c\0\i\3\x\2\5\3\y\d\n\o\a\z\t\d\z\a\7\7\q\m\7\r\u\l\9\o\q\j\l\g\m\t\z\c\y\7\v\r\n\l\7\n\1\e\u\u\z\t\g\n\c\v\u\9\3\s\h\f\7\l\8\j\n\4\f\c\m\n\m\p\k\t\4\m\x\o\u\o\n\9\t\2\3\l\4\o\m\0\j\d\8\t\p\b\b\4\d\b\x\u\i\u\o\e\l\m\2\b\e\e\b\a\z\1\i\k\2\g\1\o\r\y\r\6\0\g\z\j\o\y\n\j\4\1\b\0\d\o\9\k\d\5\i\t\g\f\c\l\h\q\9\k\f\e\w\g\4\d\n\0\b\r\a\c\q\4\v\s\k\s\8\q\r\c\v\l\g\a\m\l\t\y\d\x\n\5\d\0\m\t\9\a\q\k\e\g\r\e\r\m\9\6\b\8\4\2\g\5\v\s\0\r\1\2\a\i\1\3\5\n\5\9\f\6\z\q\7\a\j\9\q\f\d\5\e\o\5\5\m\p\u\8\n\g\g\h\b\g\l\d\0\a\l\z\i\0\a\5\f\k\j\5\b\x\h\n\2\8\2\t\k\u\g\a\f\w\x\7\r\s\n\4\q\q\1\c\p\t\q\h\g\1\d\k\g\v\n\y\h\q\6\c ]] 00:07:21.373 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:21.373 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:21.373 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:21.373 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:21.373 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:21.373 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:22.491 [2024-11-08 07:34:39.378177] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
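The trace above has just moved the outer loop from --iflag=direct to --iflag=nonblock, which makes the shape of dd_flags_misc easy to read off: a small matrix of read flags against write flags, with a content check after every copy. A rough reconstruction, with stand-ins noted in the comments:

```bash
# Sketch of the flag matrix exercised by dd_flags_misc (reconstructed from the
# xtrace above; cmp is used here in place of the test's own comparison, and
# urandom stands in for the suite's gen_bytes helper, which emits printable bytes).
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)                  # flags valid on the read side
flags_rw=("${flags_ro[@]}" sync dsync)      # flags valid on the write side

for flag_ro in "${flags_ro[@]}"; do
  # fresh 512 random bytes per outer iteration, as gen_bytes 512 does above
  head -c 512 /dev/urandom > test/dd/dd.dump0
  for flag_rw in "${flags_rw[@]}"; do
    "$DD" --if=test/dd/dd.dump0 --iflag="$flag_ro" \
          --of=test/dd/dd.dump1 --oflag="$flag_rw"
    cmp test/dd/dd.dump0 test/dd/dd.dump1   # output must match the input bytes
  done
done
```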
00:07:22.491 [2024-11-08 07:34:39.378296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ] 00:07:22.491 [2024-11-08 07:34:39.526887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.491 [2024-11-08 07:34:39.578429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.491 [2024-11-08 07:34:39.619492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.491  [2024-11-08T07:34:40.452Z] Copying: 512/512 [B] (average 500 kBps) 00:07:22.491 00:07:22.491 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ j0dsy56v0rquod3h2gliyj22wtbh5sav5tin0dazxxxsyivb6tqpkvw0y8ghzj2u8funyx7qhcxkmks7myonnk4t35ge2ls0eeknr8vpa94r5zbb4h6cxuzx9hy9vmuit4v9fm9saaf7ro2busukzimp053swtnk492lnlsifb0x7dkzxgpvnroag4adcc4l2xmuacarn4k33h5a8phczkm3wexdwmke2quzd2f43uecq2eyvz5q2abksmrvd81fihhnm322q156rttq40a78klwgkdxc2z8moul8mmmbcgp2jk6bh96z1tvv0p8n0g54y7ozg6hne8usbijs4pb493eb1p8kxy4q6ya9ucejr5f2r202jkf3m0kajayvh0dpx2qwvi8dqydhtrh2b5m8pdoeivekpeebgivzio403ldiulgfbuxbibt8a72mnwevyslp4qlal2vpzsuihzlpbik28358ls7js82vblr8xu9k1g2ccmzpqwatlp4sohn == \j\0\d\s\y\5\6\v\0\r\q\u\o\d\3\h\2\g\l\i\y\j\2\2\w\t\b\h\5\s\a\v\5\t\i\n\0\d\a\z\x\x\x\s\y\i\v\b\6\t\q\p\k\v\w\0\y\8\g\h\z\j\2\u\8\f\u\n\y\x\7\q\h\c\x\k\m\k\s\7\m\y\o\n\n\k\4\t\3\5\g\e\2\l\s\0\e\e\k\n\r\8\v\p\a\9\4\r\5\z\b\b\4\h\6\c\x\u\z\x\9\h\y\9\v\m\u\i\t\4\v\9\f\m\9\s\a\a\f\7\r\o\2\b\u\s\u\k\z\i\m\p\0\5\3\s\w\t\n\k\4\9\2\l\n\l\s\i\f\b\0\x\7\d\k\z\x\g\p\v\n\r\o\a\g\4\a\d\c\c\4\l\2\x\m\u\a\c\a\r\n\4\k\3\3\h\5\a\8\p\h\c\z\k\m\3\w\e\x\d\w\m\k\e\2\q\u\z\d\2\f\4\3\u\e\c\q\2\e\y\v\z\5\q\2\a\b\k\s\m\r\v\d\8\1\f\i\h\h\n\m\3\2\2\q\1\5\6\r\t\t\q\4\0\a\7\8\k\l\w\g\k\d\x\c\2\z\8\m\o\u\l\8\m\m\m\b\c\g\p\2\j\k\6\b\h\9\6\z\1\t\v\v\0\p\8\n\0\g\5\4\y\7\o\z\g\6\h\n\e\8\u\s\b\i\j\s\4\p\b\4\9\3\e\b\1\p\8\k\x\y\4\q\6\y\a\9\u\c\e\j\r\5\f\2\r\2\0\2\j\k\f\3\m\0\k\a\j\a\y\v\h\0\d\p\x\2\q\w\v\i\8\d\q\y\d\h\t\r\h\2\b\5\m\8\p\d\o\e\i\v\e\k\p\e\e\b\g\i\v\z\i\o\4\0\3\l\d\i\u\l\g\f\b\u\x\b\i\b\t\8\a\7\2\m\n\w\e\v\y\s\l\p\4\q\l\a\l\2\v\p\z\s\u\i\h\z\l\p\b\i\k\2\8\3\5\8\l\s\7\j\s\8\2\v\b\l\r\8\x\u\9\k\1\g\2\c\c\m\z\p\q\w\a\t\l\p\4\s\o\h\n ]] 00:07:22.491 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:22.491 07:34:39 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:22.491 [2024-11-08 07:34:39.850626] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:22.491 [2024-11-08 07:34:39.850718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60271 ] 00:07:22.491 [2024-11-08 07:34:39.998762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.491 [2024-11-08 07:34:40.053137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.491 [2024-11-08 07:34:40.094881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.491  [2024-11-08T07:34:40.452Z] Copying: 512/512 [B] (average 500 kBps) 00:07:22.491 00:07:22.492 07:34:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ j0dsy56v0rquod3h2gliyj22wtbh5sav5tin0dazxxxsyivb6tqpkvw0y8ghzj2u8funyx7qhcxkmks7myonnk4t35ge2ls0eeknr8vpa94r5zbb4h6cxuzx9hy9vmuit4v9fm9saaf7ro2busukzimp053swtnk492lnlsifb0x7dkzxgpvnroag4adcc4l2xmuacarn4k33h5a8phczkm3wexdwmke2quzd2f43uecq2eyvz5q2abksmrvd81fihhnm322q156rttq40a78klwgkdxc2z8moul8mmmbcgp2jk6bh96z1tvv0p8n0g54y7ozg6hne8usbijs4pb493eb1p8kxy4q6ya9ucejr5f2r202jkf3m0kajayvh0dpx2qwvi8dqydhtrh2b5m8pdoeivekpeebgivzio403ldiulgfbuxbibt8a72mnwevyslp4qlal2vpzsuihzlpbik28358ls7js82vblr8xu9k1g2ccmzpqwatlp4sohn == \j\0\d\s\y\5\6\v\0\r\q\u\o\d\3\h\2\g\l\i\y\j\2\2\w\t\b\h\5\s\a\v\5\t\i\n\0\d\a\z\x\x\x\s\y\i\v\b\6\t\q\p\k\v\w\0\y\8\g\h\z\j\2\u\8\f\u\n\y\x\7\q\h\c\x\k\m\k\s\7\m\y\o\n\n\k\4\t\3\5\g\e\2\l\s\0\e\e\k\n\r\8\v\p\a\9\4\r\5\z\b\b\4\h\6\c\x\u\z\x\9\h\y\9\v\m\u\i\t\4\v\9\f\m\9\s\a\a\f\7\r\o\2\b\u\s\u\k\z\i\m\p\0\5\3\s\w\t\n\k\4\9\2\l\n\l\s\i\f\b\0\x\7\d\k\z\x\g\p\v\n\r\o\a\g\4\a\d\c\c\4\l\2\x\m\u\a\c\a\r\n\4\k\3\3\h\5\a\8\p\h\c\z\k\m\3\w\e\x\d\w\m\k\e\2\q\u\z\d\2\f\4\3\u\e\c\q\2\e\y\v\z\5\q\2\a\b\k\s\m\r\v\d\8\1\f\i\h\h\n\m\3\2\2\q\1\5\6\r\t\t\q\4\0\a\7\8\k\l\w\g\k\d\x\c\2\z\8\m\o\u\l\8\m\m\m\b\c\g\p\2\j\k\6\b\h\9\6\z\1\t\v\v\0\p\8\n\0\g\5\4\y\7\o\z\g\6\h\n\e\8\u\s\b\i\j\s\4\p\b\4\9\3\e\b\1\p\8\k\x\y\4\q\6\y\a\9\u\c\e\j\r\5\f\2\r\2\0\2\j\k\f\3\m\0\k\a\j\a\y\v\h\0\d\p\x\2\q\w\v\i\8\d\q\y\d\h\t\r\h\2\b\5\m\8\p\d\o\e\i\v\e\k\p\e\e\b\g\i\v\z\i\o\4\0\3\l\d\i\u\l\g\f\b\u\x\b\i\b\t\8\a\7\2\m\n\w\e\v\y\s\l\p\4\q\l\a\l\2\v\p\z\s\u\i\h\z\l\p\b\i\k\2\8\3\5\8\l\s\7\j\s\8\2\v\b\l\r\8\x\u\9\k\1\g\2\c\c\m\z\p\q\w\a\t\l\p\4\s\o\h\n ]] 00:07:22.492 07:34:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:22.492 07:34:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:22.492 [2024-11-08 07:34:40.329327] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:22.492 [2024-11-08 07:34:40.329424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60276 ] 00:07:22.753 [2024-11-08 07:34:40.479501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.753 [2024-11-08 07:34:40.532401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.753 [2024-11-08 07:34:40.573613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.753  [2024-11-08T07:34:40.972Z] Copying: 512/512 [B] (average 166 kBps) 00:07:23.011 00:07:23.011 07:34:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ j0dsy56v0rquod3h2gliyj22wtbh5sav5tin0dazxxxsyivb6tqpkvw0y8ghzj2u8funyx7qhcxkmks7myonnk4t35ge2ls0eeknr8vpa94r5zbb4h6cxuzx9hy9vmuit4v9fm9saaf7ro2busukzimp053swtnk492lnlsifb0x7dkzxgpvnroag4adcc4l2xmuacarn4k33h5a8phczkm3wexdwmke2quzd2f43uecq2eyvz5q2abksmrvd81fihhnm322q156rttq40a78klwgkdxc2z8moul8mmmbcgp2jk6bh96z1tvv0p8n0g54y7ozg6hne8usbijs4pb493eb1p8kxy4q6ya9ucejr5f2r202jkf3m0kajayvh0dpx2qwvi8dqydhtrh2b5m8pdoeivekpeebgivzio403ldiulgfbuxbibt8a72mnwevyslp4qlal2vpzsuihzlpbik28358ls7js82vblr8xu9k1g2ccmzpqwatlp4sohn == \j\0\d\s\y\5\6\v\0\r\q\u\o\d\3\h\2\g\l\i\y\j\2\2\w\t\b\h\5\s\a\v\5\t\i\n\0\d\a\z\x\x\x\s\y\i\v\b\6\t\q\p\k\v\w\0\y\8\g\h\z\j\2\u\8\f\u\n\y\x\7\q\h\c\x\k\m\k\s\7\m\y\o\n\n\k\4\t\3\5\g\e\2\l\s\0\e\e\k\n\r\8\v\p\a\9\4\r\5\z\b\b\4\h\6\c\x\u\z\x\9\h\y\9\v\m\u\i\t\4\v\9\f\m\9\s\a\a\f\7\r\o\2\b\u\s\u\k\z\i\m\p\0\5\3\s\w\t\n\k\4\9\2\l\n\l\s\i\f\b\0\x\7\d\k\z\x\g\p\v\n\r\o\a\g\4\a\d\c\c\4\l\2\x\m\u\a\c\a\r\n\4\k\3\3\h\5\a\8\p\h\c\z\k\m\3\w\e\x\d\w\m\k\e\2\q\u\z\d\2\f\4\3\u\e\c\q\2\e\y\v\z\5\q\2\a\b\k\s\m\r\v\d\8\1\f\i\h\h\n\m\3\2\2\q\1\5\6\r\t\t\q\4\0\a\7\8\k\l\w\g\k\d\x\c\2\z\8\m\o\u\l\8\m\m\m\b\c\g\p\2\j\k\6\b\h\9\6\z\1\t\v\v\0\p\8\n\0\g\5\4\y\7\o\z\g\6\h\n\e\8\u\s\b\i\j\s\4\p\b\4\9\3\e\b\1\p\8\k\x\y\4\q\6\y\a\9\u\c\e\j\r\5\f\2\r\2\0\2\j\k\f\3\m\0\k\a\j\a\y\v\h\0\d\p\x\2\q\w\v\i\8\d\q\y\d\h\t\r\h\2\b\5\m\8\p\d\o\e\i\v\e\k\p\e\e\b\g\i\v\z\i\o\4\0\3\l\d\i\u\l\g\f\b\u\x\b\i\b\t\8\a\7\2\m\n\w\e\v\y\s\l\p\4\q\l\a\l\2\v\p\z\s\u\i\h\z\l\p\b\i\k\2\8\3\5\8\l\s\7\j\s\8\2\v\b\l\r\8\x\u\9\k\1\g\2\c\c\m\z\p\q\w\a\t\l\p\4\s\o\h\n ]] 00:07:23.011 07:34:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:23.011 07:34:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:23.011 [2024-11-08 07:34:40.808157] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:23.011 [2024-11-08 07:34:40.808260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60285 ] 00:07:23.011 [2024-11-08 07:34:40.957301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.270 [2024-11-08 07:34:41.006612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.270 [2024-11-08 07:34:41.048253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.270  [2024-11-08T07:34:41.231Z] Copying: 512/512 [B] (average 250 kBps) 00:07:23.270 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ j0dsy56v0rquod3h2gliyj22wtbh5sav5tin0dazxxxsyivb6tqpkvw0y8ghzj2u8funyx7qhcxkmks7myonnk4t35ge2ls0eeknr8vpa94r5zbb4h6cxuzx9hy9vmuit4v9fm9saaf7ro2busukzimp053swtnk492lnlsifb0x7dkzxgpvnroag4adcc4l2xmuacarn4k33h5a8phczkm3wexdwmke2quzd2f43uecq2eyvz5q2abksmrvd81fihhnm322q156rttq40a78klwgkdxc2z8moul8mmmbcgp2jk6bh96z1tvv0p8n0g54y7ozg6hne8usbijs4pb493eb1p8kxy4q6ya9ucejr5f2r202jkf3m0kajayvh0dpx2qwvi8dqydhtrh2b5m8pdoeivekpeebgivzio403ldiulgfbuxbibt8a72mnwevyslp4qlal2vpzsuihzlpbik28358ls7js82vblr8xu9k1g2ccmzpqwatlp4sohn == \j\0\d\s\y\5\6\v\0\r\q\u\o\d\3\h\2\g\l\i\y\j\2\2\w\t\b\h\5\s\a\v\5\t\i\n\0\d\a\z\x\x\x\s\y\i\v\b\6\t\q\p\k\v\w\0\y\8\g\h\z\j\2\u\8\f\u\n\y\x\7\q\h\c\x\k\m\k\s\7\m\y\o\n\n\k\4\t\3\5\g\e\2\l\s\0\e\e\k\n\r\8\v\p\a\9\4\r\5\z\b\b\4\h\6\c\x\u\z\x\9\h\y\9\v\m\u\i\t\4\v\9\f\m\9\s\a\a\f\7\r\o\2\b\u\s\u\k\z\i\m\p\0\5\3\s\w\t\n\k\4\9\2\l\n\l\s\i\f\b\0\x\7\d\k\z\x\g\p\v\n\r\o\a\g\4\a\d\c\c\4\l\2\x\m\u\a\c\a\r\n\4\k\3\3\h\5\a\8\p\h\c\z\k\m\3\w\e\x\d\w\m\k\e\2\q\u\z\d\2\f\4\3\u\e\c\q\2\e\y\v\z\5\q\2\a\b\k\s\m\r\v\d\8\1\f\i\h\h\n\m\3\2\2\q\1\5\6\r\t\t\q\4\0\a\7\8\k\l\w\g\k\d\x\c\2\z\8\m\o\u\l\8\m\m\m\b\c\g\p\2\j\k\6\b\h\9\6\z\1\t\v\v\0\p\8\n\0\g\5\4\y\7\o\z\g\6\h\n\e\8\u\s\b\i\j\s\4\p\b\4\9\3\e\b\1\p\8\k\x\y\4\q\6\y\a\9\u\c\e\j\r\5\f\2\r\2\0\2\j\k\f\3\m\0\k\a\j\a\y\v\h\0\d\p\x\2\q\w\v\i\8\d\q\y\d\h\t\r\h\2\b\5\m\8\p\d\o\e\i\v\e\k\p\e\e\b\g\i\v\z\i\o\4\0\3\l\d\i\u\l\g\f\b\u\x\b\i\b\t\8\a\7\2\m\n\w\e\v\y\s\l\p\4\q\l\a\l\2\v\p\z\s\u\i\h\z\l\p\b\i\k\2\8\3\5\8\l\s\7\j\s\8\2\v\b\l\r\8\x\u\9\k\1\g\2\c\c\m\z\p\q\w\a\t\l\p\4\s\o\h\n ]] 00:07:23.530 00:07:23.530 real 0m3.785s 00:07:23.530 user 0m1.970s 00:07:23.530 sys 0m1.824s 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:23.530 ************************************ 00:07:23.530 END TEST dd_flags_misc 00:07:23.530 ************************************ 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:23.530 * Second test run, disabling liburing, forcing AIO 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.530 ************************************ 00:07:23.530 START TEST dd_flag_append_forced_aio 00:07:23.530 ************************************ 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=zh430euuy33tszh1z3mordjdqjb3duxe 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=nz6tjp2t590pcduiubl20rpfjc0fizxi 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s zh430euuy33tszh1z3mordjdqjb3duxe 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s nz6tjp2t590pcduiubl20rpfjc0fizxi 00:07:23.530 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:23.530 [2024-11-08 07:34:41.342782] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
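The forced-AIO pass opens with the append flag: two 32-byte strings are written to dd.dump0 and dd.dump1, dd.dump0 is copied onto dd.dump1 with --oflag=append, and the destination is then expected to be the plain concatenation of the two. A compact sketch of that flow, reusing the strings from this particular run (the real test regenerates them each time):

```bash
# Sketch of the append check (dd_flag_append_forced_aio).
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=zh430euuy33tszh1z3mordjdqjb3duxe      # contents of dd.dump0 in this run
dump1=nz6tjp2t590pcduiubl20rpfjc0fizxi      # contents of dd.dump1 in this run
printf %s "$dump0" > test/dd/dd.dump0
printf %s "$dump1" > test/dd/dd.dump1

"$DD" --aio --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=append

# dd.dump1 must now be its original contents with dd.dump0 appended.
[[ "$(< test/dd/dd.dump1)" == "${dump1}${dump0}" ]]
```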
00:07:23.530 [2024-11-08 07:34:41.342851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60314 ] 00:07:23.530 [2024-11-08 07:34:41.479578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.789 [2024-11-08 07:34:41.529372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.789 [2024-11-08 07:34:41.570738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.789  [2024-11-08T07:34:42.009Z] Copying: 32/32 [B] (average 31 kBps) 00:07:24.048 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ nz6tjp2t590pcduiubl20rpfjc0fizxizh430euuy33tszh1z3mordjdqjb3duxe == \n\z\6\t\j\p\2\t\5\9\0\p\c\d\u\i\u\b\l\2\0\r\p\f\j\c\0\f\i\z\x\i\z\h\4\3\0\e\u\u\y\3\3\t\s\z\h\1\z\3\m\o\r\d\j\d\q\j\b\3\d\u\x\e ]] 00:07:24.048 00:07:24.048 real 0m0.473s 00:07:24.048 user 0m0.239s 00:07:24.048 sys 0m0.115s 00:07:24.048 ************************************ 00:07:24.048 END TEST dd_flag_append_forced_aio 00:07:24.048 ************************************ 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:24.048 ************************************ 00:07:24.048 START TEST dd_flag_directory_forced_aio 00:07:24.048 ************************************ 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.048 07:34:41 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.048 07:34:41 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.048 [2024-11-08 07:34:41.879530] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:24.048 [2024-11-08 07:34:41.879624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60340 ] 00:07:24.306 [2024-11-08 07:34:42.028195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.306 [2024-11-08 07:34:42.081246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.306 [2024-11-08 07:34:42.122820] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.306 [2024-11-08 07:34:42.152679] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.306 [2024-11-08 07:34:42.152731] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.306 [2024-11-08 07:34:42.152748] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.306 [2024-11-08 07:34:42.247847] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.565 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:24.565 [2024-11-08 07:34:42.365085] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:24.565 [2024-11-08 07:34:42.365191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60350 ] 00:07:24.565 [2024-11-08 07:34:42.514763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.824 [2024-11-08 07:34:42.568045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.824 [2024-11-08 07:34:42.609453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.824 [2024-11-08 07:34:42.637957] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.824 [2024-11-08 07:34:42.638016] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.824 [2024-11-08 07:34:42.638033] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.824 [2024-11-08 07:34:42.731917] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:25.084 07:34:42 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.084 00:07:25.084 real 0m0.969s 00:07:25.084 user 0m0.500s 00:07:25.084 sys 0m0.261s 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 ************************************ 00:07:25.084 END TEST dd_flag_directory_forced_aio 00:07:25.084 ************************************ 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 ************************************ 00:07:25.084 START TEST dd_flag_nofollow_forced_aio 00:07:25.084 ************************************ 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.084 07:34:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.084 [2024-11-08 07:34:42.915357] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:25.084 [2024-11-08 07:34:42.915453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60378 ] 00:07:25.344 [2024-11-08 07:34:43.059194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.344 [2024-11-08 07:34:43.113919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.344 [2024-11-08 07:34:43.155563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.344 [2024-11-08 07:34:43.183460] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:25.344 [2024-11-08 07:34:43.183511] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:25.344 [2024-11-08 07:34:43.183529] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.344 [2024-11-08 07:34:43.276667] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.602 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:25.602 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.602 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:25.602 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.602 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:25.602 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.603 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.603 [2024-11-08 07:34:43.396261] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:25.603 [2024-11-08 07:34:43.396364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60388 ] 00:07:25.603 [2024-11-08 07:34:43.546623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.862 [2024-11-08 07:34:43.599682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.862 [2024-11-08 07:34:43.640972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.862 [2024-11-08 07:34:43.669058] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:25.862 [2024-11-08 07:34:43.669105] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:25.862 [2024-11-08 07:34:43.669123] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.862 [2024-11-08 07:34:43.763010] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.862 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:25.862 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.862 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:26.120 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:26.121 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:26.121 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.121 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:26.121 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:26.121 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:26.121 07:34:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.121 [2024-11-08 07:34:43.891754] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:26.121 [2024-11-08 07:34:43.891846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60390 ] 00:07:26.121 [2024-11-08 07:34:44.041633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.380 [2024-11-08 07:34:44.096654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.380 [2024-11-08 07:34:44.146342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.380  [2024-11-08T07:34:44.600Z] Copying: 512/512 [B] (average 500 kBps) 00:07:26.639 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 35rngpqhew9r8vp2toanqd4oe0727rnvyw6t2a7d3ys1dp0qbn27xvb20qzxl0enf56v5hkmb1vrijtselbamklkwymbtxuejuglmuu79prq7aao3gdas129yeivrwbt7h9bzh2tgwuna48ulv2d9z678qwvy6zpe4q9ah62yrrr2l49i5nxhipuglgq93a3gafmu990h3348ee84yzj1bqf9u1kkgoglyvhypirzlw34pxbhtv4mtnf2g2h0i08dfrxk2xzt6gpf0fw76xlrgrom7v2oaxrrfvfnig1o4js21w41gw4alflcho2z0lghrg78d20toum0s55n4p7j4dvg1qthf0saqb8p249gzwen7xwpxu9xka6i4jahjb8qiuvhf0bb2jttuafu5ulgg98rov5cm46alktiw8f9jh965cxam1696scmdg293vedzrbd8duhmq2r12p2mz2yvd2m1px0o23ixho2xke9j189wjrqf8e3k85fh5mpttt == \3\5\r\n\g\p\q\h\e\w\9\r\8\v\p\2\t\o\a\n\q\d\4\o\e\0\7\2\7\r\n\v\y\w\6\t\2\a\7\d\3\y\s\1\d\p\0\q\b\n\2\7\x\v\b\2\0\q\z\x\l\0\e\n\f\5\6\v\5\h\k\m\b\1\v\r\i\j\t\s\e\l\b\a\m\k\l\k\w\y\m\b\t\x\u\e\j\u\g\l\m\u\u\7\9\p\r\q\7\a\a\o\3\g\d\a\s\1\2\9\y\e\i\v\r\w\b\t\7\h\9\b\z\h\2\t\g\w\u\n\a\4\8\u\l\v\2\d\9\z\6\7\8\q\w\v\y\6\z\p\e\4\q\9\a\h\6\2\y\r\r\r\2\l\4\9\i\5\n\x\h\i\p\u\g\l\g\q\9\3\a\3\g\a\f\m\u\9\9\0\h\3\3\4\8\e\e\8\4\y\z\j\1\b\q\f\9\u\1\k\k\g\o\g\l\y\v\h\y\p\i\r\z\l\w\3\4\p\x\b\h\t\v\4\m\t\n\f\2\g\2\h\0\i\0\8\d\f\r\x\k\2\x\z\t\6\g\p\f\0\f\w\7\6\x\l\r\g\r\o\m\7\v\2\o\a\x\r\r\f\v\f\n\i\g\1\o\4\j\s\2\1\w\4\1\g\w\4\a\l\f\l\c\h\o\2\z\0\l\g\h\r\g\7\8\d\2\0\t\o\u\m\0\s\5\5\n\4\p\7\j\4\d\v\g\1\q\t\h\f\0\s\a\q\b\8\p\2\4\9\g\z\w\e\n\7\x\w\p\x\u\9\x\k\a\6\i\4\j\a\h\j\b\8\q\i\u\v\h\f\0\b\b\2\j\t\t\u\a\f\u\5\u\l\g\g\9\8\r\o\v\5\c\m\4\6\a\l\k\t\i\w\8\f\9\j\h\9\6\5\c\x\a\m\1\6\9\6\s\c\m\d\g\2\9\3\v\e\d\z\r\b\d\8\d\u\h\m\q\2\r\1\2\p\2\m\z\2\y\v\d\2\m\1\p\x\0\o\2\3\i\x\h\o\2\x\k\e\9\j\1\8\9\w\j\r\q\f\8\e\3\k\8\5\f\h\5\m\p\t\t\t ]] 00:07:26.639 00:07:26.639 real 0m1.501s 00:07:26.639 user 0m0.770s 00:07:26.639 sys 0m0.404s 00:07:26.639 ************************************ 00:07:26.639 END TEST dd_flag_nofollow_forced_aio 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:26.639 ************************************ 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:26.639 ************************************ 00:07:26.639 START TEST dd_flag_noatime_forced_aio 00:07:26.639 ************************************ 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731051284 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731051284 00:07:26.639 07:34:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:27.577 07:34:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.577 [2024-11-08 07:34:45.496480] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
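The directory and nofollow checks that finish just above are both negative tests: the copy is expected to fail with a specific error, and the suite's NOT wrapper turns that expected failure into a pass. A bare-bones sketch of the same checks, with plain `!` standing in for NOT:

```bash
# Sketch of the two expect-failure checks above (dd_flag_directory_forced_aio
# and dd_flag_nofollow_forced_aio); `!` replaces the suite's NOT helper.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=test/dd/dd.dump0
DST=test/dd/dd.dump1

# --iflag=directory on a regular file must fail ("Not a directory").
! "$DD" --aio --if="$SRC" --iflag=directory --of="$SRC"

# Reading through a symlink with --iflag=nofollow must fail
# ("Too many levels of symbolic links")...
ln -fs "$SRC" "$SRC.link"
! "$DD" --aio --if="$SRC.link" --iflag=nofollow --of="$DST"

# ...while the same copy without nofollow is expected to succeed.
"$DD" --aio --if="$SRC.link" --of="$DST"
```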
00:07:27.577 [2024-11-08 07:34:45.496585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60436 ] 00:07:27.836 [2024-11-08 07:34:45.647445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.836 [2024-11-08 07:34:45.698115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.836 [2024-11-08 07:34:45.739562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.836  [2024-11-08T07:34:46.056Z] Copying: 512/512 [B] (average 500 kBps) 00:07:28.095 00:07:28.095 07:34:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.095 07:34:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731051284 )) 00:07:28.095 07:34:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.095 07:34:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731051284 )) 00:07:28.095 07:34:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.095 [2024-11-08 07:34:45.992848] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:28.095 [2024-11-08 07:34:45.992919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60447 ] 00:07:28.354 [2024-11-08 07:34:46.131958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.354 [2024-11-08 07:34:46.184019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.354 [2024-11-08 07:34:46.225593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.354  [2024-11-08T07:34:46.574Z] Copying: 512/512 [B] (average 500 kBps) 00:07:28.613 00:07:28.613 07:34:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.613 07:34:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731051286 )) 00:07:28.613 00:07:28.613 real 0m2.020s 00:07:28.613 user 0m0.508s 00:07:28.613 sys 0m0.272s 00:07:28.613 07:34:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.613 07:34:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.613 ************************************ 00:07:28.613 END TEST dd_flag_noatime_forced_aio 00:07:28.613 ************************************ 00:07:28.613 07:34:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:28.613 07:34:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:28.613 07:34:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.613 07:34:46 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.614 ************************************ 00:07:28.614 START TEST dd_flags_misc_forced_aio 00:07:28.614 ************************************ 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.614 07:34:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:28.614 [2024-11-08 07:34:46.562781] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:28.614 [2024-11-08 07:34:46.562878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60474 ] 00:07:28.873 [2024-11-08 07:34:46.712191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.873 [2024-11-08 07:34:46.763606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.873 [2024-11-08 07:34:46.805523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.132  [2024-11-08T07:34:47.093Z] Copying: 512/512 [B] (average 500 kBps) 00:07:29.132 00:07:29.132 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jursyt643xbkr9mfn0i39ec8ysab8ur12skw2zoqom1yj94lrtp5w5uc2btsze8yt5ayxx034hi9v3uq8ig9kyi1doh46gduuvirai0jnn0udlcivaid04l8w2y746rh5u6vv5mer4a0yie5elxp7x8pz4p5l9ihn86uw6w6nde2idvskl04zc7k04fe3ulmflv8m26o609zv5scn9yylkaz0m4r4bufegsjsl8wrc1763756fmv3d35ookk73elvace6ys7zlttq8jhhltzt4qnsmip5p3tx87244bialdzy76hfypxbnxysknw5yrckmeyr09l5h67s7v70x9m8bv1e1ikcsc2a1aikg3lrlzxx1p1exdaj2dm6ilnm57urw3k1dtps7zpgr2kn8zdchg9r7xouyfylnhbtutpyj3b4lsimunsa8fp9w306pqxowsw3bk2n8yw4dkft4g0i56ooc5xg2yn4jga0m7bvj60ggz7lzmg67ltfqe65sk7 == 
\j\u\r\s\y\t\6\4\3\x\b\k\r\9\m\f\n\0\i\3\9\e\c\8\y\s\a\b\8\u\r\1\2\s\k\w\2\z\o\q\o\m\1\y\j\9\4\l\r\t\p\5\w\5\u\c\2\b\t\s\z\e\8\y\t\5\a\y\x\x\0\3\4\h\i\9\v\3\u\q\8\i\g\9\k\y\i\1\d\o\h\4\6\g\d\u\u\v\i\r\a\i\0\j\n\n\0\u\d\l\c\i\v\a\i\d\0\4\l\8\w\2\y\7\4\6\r\h\5\u\6\v\v\5\m\e\r\4\a\0\y\i\e\5\e\l\x\p\7\x\8\p\z\4\p\5\l\9\i\h\n\8\6\u\w\6\w\6\n\d\e\2\i\d\v\s\k\l\0\4\z\c\7\k\0\4\f\e\3\u\l\m\f\l\v\8\m\2\6\o\6\0\9\z\v\5\s\c\n\9\y\y\l\k\a\z\0\m\4\r\4\b\u\f\e\g\s\j\s\l\8\w\r\c\1\7\6\3\7\5\6\f\m\v\3\d\3\5\o\o\k\k\7\3\e\l\v\a\c\e\6\y\s\7\z\l\t\t\q\8\j\h\h\l\t\z\t\4\q\n\s\m\i\p\5\p\3\t\x\8\7\2\4\4\b\i\a\l\d\z\y\7\6\h\f\y\p\x\b\n\x\y\s\k\n\w\5\y\r\c\k\m\e\y\r\0\9\l\5\h\6\7\s\7\v\7\0\x\9\m\8\b\v\1\e\1\i\k\c\s\c\2\a\1\a\i\k\g\3\l\r\l\z\x\x\1\p\1\e\x\d\a\j\2\d\m\6\i\l\n\m\5\7\u\r\w\3\k\1\d\t\p\s\7\z\p\g\r\2\k\n\8\z\d\c\h\g\9\r\7\x\o\u\y\f\y\l\n\h\b\t\u\t\p\y\j\3\b\4\l\s\i\m\u\n\s\a\8\f\p\9\w\3\0\6\p\q\x\o\w\s\w\3\b\k\2\n\8\y\w\4\d\k\f\t\4\g\0\i\5\6\o\o\c\5\x\g\2\y\n\4\j\g\a\0\m\7\b\v\j\6\0\g\g\z\7\l\z\m\g\6\7\l\t\f\q\e\6\5\s\k\7 ]] 00:07:29.132 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.132 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:29.132 [2024-11-08 07:34:47.051739] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:29.132 [2024-11-08 07:34:47.052325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60486 ] 00:07:29.391 [2024-11-08 07:34:47.203108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.391 [2024-11-08 07:34:47.255137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.391 [2024-11-08 07:34:47.297075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.391  [2024-11-08T07:34:47.610Z] Copying: 512/512 [B] (average 500 kBps) 00:07:29.649 00:07:29.650 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jursyt643xbkr9mfn0i39ec8ysab8ur12skw2zoqom1yj94lrtp5w5uc2btsze8yt5ayxx034hi9v3uq8ig9kyi1doh46gduuvirai0jnn0udlcivaid04l8w2y746rh5u6vv5mer4a0yie5elxp7x8pz4p5l9ihn86uw6w6nde2idvskl04zc7k04fe3ulmflv8m26o609zv5scn9yylkaz0m4r4bufegsjsl8wrc1763756fmv3d35ookk73elvace6ys7zlttq8jhhltzt4qnsmip5p3tx87244bialdzy76hfypxbnxysknw5yrckmeyr09l5h67s7v70x9m8bv1e1ikcsc2a1aikg3lrlzxx1p1exdaj2dm6ilnm57urw3k1dtps7zpgr2kn8zdchg9r7xouyfylnhbtutpyj3b4lsimunsa8fp9w306pqxowsw3bk2n8yw4dkft4g0i56ooc5xg2yn4jga0m7bvj60ggz7lzmg67ltfqe65sk7 == 
\j\u\r\s\y\t\6\4\3\x\b\k\r\9\m\f\n\0\i\3\9\e\c\8\y\s\a\b\8\u\r\1\2\s\k\w\2\z\o\q\o\m\1\y\j\9\4\l\r\t\p\5\w\5\u\c\2\b\t\s\z\e\8\y\t\5\a\y\x\x\0\3\4\h\i\9\v\3\u\q\8\i\g\9\k\y\i\1\d\o\h\4\6\g\d\u\u\v\i\r\a\i\0\j\n\n\0\u\d\l\c\i\v\a\i\d\0\4\l\8\w\2\y\7\4\6\r\h\5\u\6\v\v\5\m\e\r\4\a\0\y\i\e\5\e\l\x\p\7\x\8\p\z\4\p\5\l\9\i\h\n\8\6\u\w\6\w\6\n\d\e\2\i\d\v\s\k\l\0\4\z\c\7\k\0\4\f\e\3\u\l\m\f\l\v\8\m\2\6\o\6\0\9\z\v\5\s\c\n\9\y\y\l\k\a\z\0\m\4\r\4\b\u\f\e\g\s\j\s\l\8\w\r\c\1\7\6\3\7\5\6\f\m\v\3\d\3\5\o\o\k\k\7\3\e\l\v\a\c\e\6\y\s\7\z\l\t\t\q\8\j\h\h\l\t\z\t\4\q\n\s\m\i\p\5\p\3\t\x\8\7\2\4\4\b\i\a\l\d\z\y\7\6\h\f\y\p\x\b\n\x\y\s\k\n\w\5\y\r\c\k\m\e\y\r\0\9\l\5\h\6\7\s\7\v\7\0\x\9\m\8\b\v\1\e\1\i\k\c\s\c\2\a\1\a\i\k\g\3\l\r\l\z\x\x\1\p\1\e\x\d\a\j\2\d\m\6\i\l\n\m\5\7\u\r\w\3\k\1\d\t\p\s\7\z\p\g\r\2\k\n\8\z\d\c\h\g\9\r\7\x\o\u\y\f\y\l\n\h\b\t\u\t\p\y\j\3\b\4\l\s\i\m\u\n\s\a\8\f\p\9\w\3\0\6\p\q\x\o\w\s\w\3\b\k\2\n\8\y\w\4\d\k\f\t\4\g\0\i\5\6\o\o\c\5\x\g\2\y\n\4\j\g\a\0\m\7\b\v\j\6\0\g\g\z\7\l\z\m\g\6\7\l\t\f\q\e\6\5\s\k\7 ]] 00:07:29.650 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.650 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:29.650 [2024-11-08 07:34:47.530150] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:29.650 [2024-11-08 07:34:47.530240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60489 ] 00:07:29.909 [2024-11-08 07:34:47.671331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.909 [2024-11-08 07:34:47.722410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.909 [2024-11-08 07:34:47.763698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.909  [2024-11-08T07:34:48.130Z] Copying: 512/512 [B] (average 250 kBps) 00:07:30.169 00:07:30.169 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jursyt643xbkr9mfn0i39ec8ysab8ur12skw2zoqom1yj94lrtp5w5uc2btsze8yt5ayxx034hi9v3uq8ig9kyi1doh46gduuvirai0jnn0udlcivaid04l8w2y746rh5u6vv5mer4a0yie5elxp7x8pz4p5l9ihn86uw6w6nde2idvskl04zc7k04fe3ulmflv8m26o609zv5scn9yylkaz0m4r4bufegsjsl8wrc1763756fmv3d35ookk73elvace6ys7zlttq8jhhltzt4qnsmip5p3tx87244bialdzy76hfypxbnxysknw5yrckmeyr09l5h67s7v70x9m8bv1e1ikcsc2a1aikg3lrlzxx1p1exdaj2dm6ilnm57urw3k1dtps7zpgr2kn8zdchg9r7xouyfylnhbtutpyj3b4lsimunsa8fp9w306pqxowsw3bk2n8yw4dkft4g0i56ooc5xg2yn4jga0m7bvj60ggz7lzmg67ltfqe65sk7 == 
\j\u\r\s\y\t\6\4\3\x\b\k\r\9\m\f\n\0\i\3\9\e\c\8\y\s\a\b\8\u\r\1\2\s\k\w\2\z\o\q\o\m\1\y\j\9\4\l\r\t\p\5\w\5\u\c\2\b\t\s\z\e\8\y\t\5\a\y\x\x\0\3\4\h\i\9\v\3\u\q\8\i\g\9\k\y\i\1\d\o\h\4\6\g\d\u\u\v\i\r\a\i\0\j\n\n\0\u\d\l\c\i\v\a\i\d\0\4\l\8\w\2\y\7\4\6\r\h\5\u\6\v\v\5\m\e\r\4\a\0\y\i\e\5\e\l\x\p\7\x\8\p\z\4\p\5\l\9\i\h\n\8\6\u\w\6\w\6\n\d\e\2\i\d\v\s\k\l\0\4\z\c\7\k\0\4\f\e\3\u\l\m\f\l\v\8\m\2\6\o\6\0\9\z\v\5\s\c\n\9\y\y\l\k\a\z\0\m\4\r\4\b\u\f\e\g\s\j\s\l\8\w\r\c\1\7\6\3\7\5\6\f\m\v\3\d\3\5\o\o\k\k\7\3\e\l\v\a\c\e\6\y\s\7\z\l\t\t\q\8\j\h\h\l\t\z\t\4\q\n\s\m\i\p\5\p\3\t\x\8\7\2\4\4\b\i\a\l\d\z\y\7\6\h\f\y\p\x\b\n\x\y\s\k\n\w\5\y\r\c\k\m\e\y\r\0\9\l\5\h\6\7\s\7\v\7\0\x\9\m\8\b\v\1\e\1\i\k\c\s\c\2\a\1\a\i\k\g\3\l\r\l\z\x\x\1\p\1\e\x\d\a\j\2\d\m\6\i\l\n\m\5\7\u\r\w\3\k\1\d\t\p\s\7\z\p\g\r\2\k\n\8\z\d\c\h\g\9\r\7\x\o\u\y\f\y\l\n\h\b\t\u\t\p\y\j\3\b\4\l\s\i\m\u\n\s\a\8\f\p\9\w\3\0\6\p\q\x\o\w\s\w\3\b\k\2\n\8\y\w\4\d\k\f\t\4\g\0\i\5\6\o\o\c\5\x\g\2\y\n\4\j\g\a\0\m\7\b\v\j\6\0\g\g\z\7\l\z\m\g\6\7\l\t\f\q\e\6\5\s\k\7 ]] 00:07:30.169 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.169 07:34:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:30.169 [2024-11-08 07:34:48.001387] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:30.169 [2024-11-08 07:34:48.001458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60491 ] 00:07:30.428 [2024-11-08 07:34:48.142908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.428 [2024-11-08 07:34:48.193015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.428 [2024-11-08 07:34:48.233999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.428  [2024-11-08T07:34:48.648Z] Copying: 512/512 [B] (average 500 kBps) 00:07:30.687 00:07:30.687 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jursyt643xbkr9mfn0i39ec8ysab8ur12skw2zoqom1yj94lrtp5w5uc2btsze8yt5ayxx034hi9v3uq8ig9kyi1doh46gduuvirai0jnn0udlcivaid04l8w2y746rh5u6vv5mer4a0yie5elxp7x8pz4p5l9ihn86uw6w6nde2idvskl04zc7k04fe3ulmflv8m26o609zv5scn9yylkaz0m4r4bufegsjsl8wrc1763756fmv3d35ookk73elvace6ys7zlttq8jhhltzt4qnsmip5p3tx87244bialdzy76hfypxbnxysknw5yrckmeyr09l5h67s7v70x9m8bv1e1ikcsc2a1aikg3lrlzxx1p1exdaj2dm6ilnm57urw3k1dtps7zpgr2kn8zdchg9r7xouyfylnhbtutpyj3b4lsimunsa8fp9w306pqxowsw3bk2n8yw4dkft4g0i56ooc5xg2yn4jga0m7bvj60ggz7lzmg67ltfqe65sk7 == 
\j\u\r\s\y\t\6\4\3\x\b\k\r\9\m\f\n\0\i\3\9\e\c\8\y\s\a\b\8\u\r\1\2\s\k\w\2\z\o\q\o\m\1\y\j\9\4\l\r\t\p\5\w\5\u\c\2\b\t\s\z\e\8\y\t\5\a\y\x\x\0\3\4\h\i\9\v\3\u\q\8\i\g\9\k\y\i\1\d\o\h\4\6\g\d\u\u\v\i\r\a\i\0\j\n\n\0\u\d\l\c\i\v\a\i\d\0\4\l\8\w\2\y\7\4\6\r\h\5\u\6\v\v\5\m\e\r\4\a\0\y\i\e\5\e\l\x\p\7\x\8\p\z\4\p\5\l\9\i\h\n\8\6\u\w\6\w\6\n\d\e\2\i\d\v\s\k\l\0\4\z\c\7\k\0\4\f\e\3\u\l\m\f\l\v\8\m\2\6\o\6\0\9\z\v\5\s\c\n\9\y\y\l\k\a\z\0\m\4\r\4\b\u\f\e\g\s\j\s\l\8\w\r\c\1\7\6\3\7\5\6\f\m\v\3\d\3\5\o\o\k\k\7\3\e\l\v\a\c\e\6\y\s\7\z\l\t\t\q\8\j\h\h\l\t\z\t\4\q\n\s\m\i\p\5\p\3\t\x\8\7\2\4\4\b\i\a\l\d\z\y\7\6\h\f\y\p\x\b\n\x\y\s\k\n\w\5\y\r\c\k\m\e\y\r\0\9\l\5\h\6\7\s\7\v\7\0\x\9\m\8\b\v\1\e\1\i\k\c\s\c\2\a\1\a\i\k\g\3\l\r\l\z\x\x\1\p\1\e\x\d\a\j\2\d\m\6\i\l\n\m\5\7\u\r\w\3\k\1\d\t\p\s\7\z\p\g\r\2\k\n\8\z\d\c\h\g\9\r\7\x\o\u\y\f\y\l\n\h\b\t\u\t\p\y\j\3\b\4\l\s\i\m\u\n\s\a\8\f\p\9\w\3\0\6\p\q\x\o\w\s\w\3\b\k\2\n\8\y\w\4\d\k\f\t\4\g\0\i\5\6\o\o\c\5\x\g\2\y\n\4\j\g\a\0\m\7\b\v\j\6\0\g\g\z\7\l\z\m\g\6\7\l\t\f\q\e\6\5\s\k\7 ]] 00:07:30.687 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:30.687 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:30.687 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:30.687 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.687 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.687 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:30.687 [2024-11-08 07:34:48.483698] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:30.688 [2024-11-08 07:34:48.483772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60504 ] 00:07:30.688 [2024-11-08 07:34:48.625983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.946 [2024-11-08 07:34:48.675172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.946 [2024-11-08 07:34:48.716381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.946  [2024-11-08T07:34:48.907Z] Copying: 512/512 [B] (average 500 kBps) 00:07:30.946 00:07:31.204 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ d1ofhpgn1hhauju3th10xie09hz7op6q7aslakd1jnd23fhb9fq17m5s7781vgntjuoguc4qy677kd532txgawyjygv1rfet542yu5tpaxuctg35i8j3tbxrso3950898osxrsirafr0kmk6y158j647tzoethhanzpfd4bvk358m4lalp35ag62mh5nwfzc6o2kelbqbz3aqcc4w2wv548jwtf93kdqt172j4tn4528tygqk8s14ccqwjllblp84sjjhu8n6hizgunq8ywmckfabf4nva7g9wlt1i1qz25jslaldesb10541151zhw5oxyv6xj9gdwyxm2o06o54031gr0uln28n6a2ytz3onz7bzy24eix3qibzd3u5s1kkl8wb38acilmwk94z2blrj3581u4qo8lmx13k6rlvd4cja1qnapdmnw6aw7p5esecfca2ncgmc4ewjmusumirk6fx27nzkdxvqm1jj9pxvyzxo1jklywwp8013zieh5s == \d\1\o\f\h\p\g\n\1\h\h\a\u\j\u\3\t\h\1\0\x\i\e\0\9\h\z\7\o\p\6\q\7\a\s\l\a\k\d\1\j\n\d\2\3\f\h\b\9\f\q\1\7\m\5\s\7\7\8\1\v\g\n\t\j\u\o\g\u\c\4\q\y\6\7\7\k\d\5\3\2\t\x\g\a\w\y\j\y\g\v\1\r\f\e\t\5\4\2\y\u\5\t\p\a\x\u\c\t\g\3\5\i\8\j\3\t\b\x\r\s\o\3\9\5\0\8\9\8\o\s\x\r\s\i\r\a\f\r\0\k\m\k\6\y\1\5\8\j\6\4\7\t\z\o\e\t\h\h\a\n\z\p\f\d\4\b\v\k\3\5\8\m\4\l\a\l\p\3\5\a\g\6\2\m\h\5\n\w\f\z\c\6\o\2\k\e\l\b\q\b\z\3\a\q\c\c\4\w\2\w\v\5\4\8\j\w\t\f\9\3\k\d\q\t\1\7\2\j\4\t\n\4\5\2\8\t\y\g\q\k\8\s\1\4\c\c\q\w\j\l\l\b\l\p\8\4\s\j\j\h\u\8\n\6\h\i\z\g\u\n\q\8\y\w\m\c\k\f\a\b\f\4\n\v\a\7\g\9\w\l\t\1\i\1\q\z\2\5\j\s\l\a\l\d\e\s\b\1\0\5\4\1\1\5\1\z\h\w\5\o\x\y\v\6\x\j\9\g\d\w\y\x\m\2\o\0\6\o\5\4\0\3\1\g\r\0\u\l\n\2\8\n\6\a\2\y\t\z\3\o\n\z\7\b\z\y\2\4\e\i\x\3\q\i\b\z\d\3\u\5\s\1\k\k\l\8\w\b\3\8\a\c\i\l\m\w\k\9\4\z\2\b\l\r\j\3\5\8\1\u\4\q\o\8\l\m\x\1\3\k\6\r\l\v\d\4\c\j\a\1\q\n\a\p\d\m\n\w\6\a\w\7\p\5\e\s\e\c\f\c\a\2\n\c\g\m\c\4\e\w\j\m\u\s\u\m\i\r\k\6\f\x\2\7\n\z\k\d\x\v\q\m\1\j\j\9\p\x\v\y\z\x\o\1\j\k\l\y\w\w\p\8\0\1\3\z\i\e\h\5\s ]] 00:07:31.204 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.204 07:34:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:31.204 [2024-11-08 07:34:48.954791] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:31.204 [2024-11-08 07:34:48.954870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60506 ] 00:07:31.204 [2024-11-08 07:34:49.091462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.204 [2024-11-08 07:34:49.140584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.463 [2024-11-08 07:34:49.181926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.463  [2024-11-08T07:34:49.424Z] Copying: 512/512 [B] (average 500 kBps) 00:07:31.463 00:07:31.463 07:34:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ d1ofhpgn1hhauju3th10xie09hz7op6q7aslakd1jnd23fhb9fq17m5s7781vgntjuoguc4qy677kd532txgawyjygv1rfet542yu5tpaxuctg35i8j3tbxrso3950898osxrsirafr0kmk6y158j647tzoethhanzpfd4bvk358m4lalp35ag62mh5nwfzc6o2kelbqbz3aqcc4w2wv548jwtf93kdqt172j4tn4528tygqk8s14ccqwjllblp84sjjhu8n6hizgunq8ywmckfabf4nva7g9wlt1i1qz25jslaldesb10541151zhw5oxyv6xj9gdwyxm2o06o54031gr0uln28n6a2ytz3onz7bzy24eix3qibzd3u5s1kkl8wb38acilmwk94z2blrj3581u4qo8lmx13k6rlvd4cja1qnapdmnw6aw7p5esecfca2ncgmc4ewjmusumirk6fx27nzkdxvqm1jj9pxvyzxo1jklywwp8013zieh5s == \d\1\o\f\h\p\g\n\1\h\h\a\u\j\u\3\t\h\1\0\x\i\e\0\9\h\z\7\o\p\6\q\7\a\s\l\a\k\d\1\j\n\d\2\3\f\h\b\9\f\q\1\7\m\5\s\7\7\8\1\v\g\n\t\j\u\o\g\u\c\4\q\y\6\7\7\k\d\5\3\2\t\x\g\a\w\y\j\y\g\v\1\r\f\e\t\5\4\2\y\u\5\t\p\a\x\u\c\t\g\3\5\i\8\j\3\t\b\x\r\s\o\3\9\5\0\8\9\8\o\s\x\r\s\i\r\a\f\r\0\k\m\k\6\y\1\5\8\j\6\4\7\t\z\o\e\t\h\h\a\n\z\p\f\d\4\b\v\k\3\5\8\m\4\l\a\l\p\3\5\a\g\6\2\m\h\5\n\w\f\z\c\6\o\2\k\e\l\b\q\b\z\3\a\q\c\c\4\w\2\w\v\5\4\8\j\w\t\f\9\3\k\d\q\t\1\7\2\j\4\t\n\4\5\2\8\t\y\g\q\k\8\s\1\4\c\c\q\w\j\l\l\b\l\p\8\4\s\j\j\h\u\8\n\6\h\i\z\g\u\n\q\8\y\w\m\c\k\f\a\b\f\4\n\v\a\7\g\9\w\l\t\1\i\1\q\z\2\5\j\s\l\a\l\d\e\s\b\1\0\5\4\1\1\5\1\z\h\w\5\o\x\y\v\6\x\j\9\g\d\w\y\x\m\2\o\0\6\o\5\4\0\3\1\g\r\0\u\l\n\2\8\n\6\a\2\y\t\z\3\o\n\z\7\b\z\y\2\4\e\i\x\3\q\i\b\z\d\3\u\5\s\1\k\k\l\8\w\b\3\8\a\c\i\l\m\w\k\9\4\z\2\b\l\r\j\3\5\8\1\u\4\q\o\8\l\m\x\1\3\k\6\r\l\v\d\4\c\j\a\1\q\n\a\p\d\m\n\w\6\a\w\7\p\5\e\s\e\c\f\c\a\2\n\c\g\m\c\4\e\w\j\m\u\s\u\m\i\r\k\6\f\x\2\7\n\z\k\d\x\v\q\m\1\j\j\9\p\x\v\y\z\x\o\1\j\k\l\y\w\w\p\8\0\1\3\z\i\e\h\5\s ]] 00:07:31.463 07:34:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.463 07:34:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:31.723 [2024-11-08 07:34:49.431114] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:31.723 [2024-11-08 07:34:49.431207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60519 ] 00:07:31.723 [2024-11-08 07:34:49.581763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.723 [2024-11-08 07:34:49.628582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.723 [2024-11-08 07:34:49.669707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.982  [2024-11-08T07:34:49.943Z] Copying: 512/512 [B] (average 500 kBps) 00:07:31.982 00:07:31.982 07:34:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ d1ofhpgn1hhauju3th10xie09hz7op6q7aslakd1jnd23fhb9fq17m5s7781vgntjuoguc4qy677kd532txgawyjygv1rfet542yu5tpaxuctg35i8j3tbxrso3950898osxrsirafr0kmk6y158j647tzoethhanzpfd4bvk358m4lalp35ag62mh5nwfzc6o2kelbqbz3aqcc4w2wv548jwtf93kdqt172j4tn4528tygqk8s14ccqwjllblp84sjjhu8n6hizgunq8ywmckfabf4nva7g9wlt1i1qz25jslaldesb10541151zhw5oxyv6xj9gdwyxm2o06o54031gr0uln28n6a2ytz3onz7bzy24eix3qibzd3u5s1kkl8wb38acilmwk94z2blrj3581u4qo8lmx13k6rlvd4cja1qnapdmnw6aw7p5esecfca2ncgmc4ewjmusumirk6fx27nzkdxvqm1jj9pxvyzxo1jklywwp8013zieh5s == \d\1\o\f\h\p\g\n\1\h\h\a\u\j\u\3\t\h\1\0\x\i\e\0\9\h\z\7\o\p\6\q\7\a\s\l\a\k\d\1\j\n\d\2\3\f\h\b\9\f\q\1\7\m\5\s\7\7\8\1\v\g\n\t\j\u\o\g\u\c\4\q\y\6\7\7\k\d\5\3\2\t\x\g\a\w\y\j\y\g\v\1\r\f\e\t\5\4\2\y\u\5\t\p\a\x\u\c\t\g\3\5\i\8\j\3\t\b\x\r\s\o\3\9\5\0\8\9\8\o\s\x\r\s\i\r\a\f\r\0\k\m\k\6\y\1\5\8\j\6\4\7\t\z\o\e\t\h\h\a\n\z\p\f\d\4\b\v\k\3\5\8\m\4\l\a\l\p\3\5\a\g\6\2\m\h\5\n\w\f\z\c\6\o\2\k\e\l\b\q\b\z\3\a\q\c\c\4\w\2\w\v\5\4\8\j\w\t\f\9\3\k\d\q\t\1\7\2\j\4\t\n\4\5\2\8\t\y\g\q\k\8\s\1\4\c\c\q\w\j\l\l\b\l\p\8\4\s\j\j\h\u\8\n\6\h\i\z\g\u\n\q\8\y\w\m\c\k\f\a\b\f\4\n\v\a\7\g\9\w\l\t\1\i\1\q\z\2\5\j\s\l\a\l\d\e\s\b\1\0\5\4\1\1\5\1\z\h\w\5\o\x\y\v\6\x\j\9\g\d\w\y\x\m\2\o\0\6\o\5\4\0\3\1\g\r\0\u\l\n\2\8\n\6\a\2\y\t\z\3\o\n\z\7\b\z\y\2\4\e\i\x\3\q\i\b\z\d\3\u\5\s\1\k\k\l\8\w\b\3\8\a\c\i\l\m\w\k\9\4\z\2\b\l\r\j\3\5\8\1\u\4\q\o\8\l\m\x\1\3\k\6\r\l\v\d\4\c\j\a\1\q\n\a\p\d\m\n\w\6\a\w\7\p\5\e\s\e\c\f\c\a\2\n\c\g\m\c\4\e\w\j\m\u\s\u\m\i\r\k\6\f\x\2\7\n\z\k\d\x\v\q\m\1\j\j\9\p\x\v\y\z\x\o\1\j\k\l\y\w\w\p\8\0\1\3\z\i\e\h\5\s ]] 00:07:31.982 07:34:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.982 07:34:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:31.982 [2024-11-08 07:34:49.908602] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:31.982 [2024-11-08 07:34:49.908678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60521 ] 00:07:32.301 [2024-11-08 07:34:50.049551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.301 [2024-11-08 07:34:50.098599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.301 [2024-11-08 07:34:50.140024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.301  [2024-11-08T07:34:50.521Z] Copying: 512/512 [B] (average 500 kBps) 00:07:32.560 00:07:32.560 07:34:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ d1ofhpgn1hhauju3th10xie09hz7op6q7aslakd1jnd23fhb9fq17m5s7781vgntjuoguc4qy677kd532txgawyjygv1rfet542yu5tpaxuctg35i8j3tbxrso3950898osxrsirafr0kmk6y158j647tzoethhanzpfd4bvk358m4lalp35ag62mh5nwfzc6o2kelbqbz3aqcc4w2wv548jwtf93kdqt172j4tn4528tygqk8s14ccqwjllblp84sjjhu8n6hizgunq8ywmckfabf4nva7g9wlt1i1qz25jslaldesb10541151zhw5oxyv6xj9gdwyxm2o06o54031gr0uln28n6a2ytz3onz7bzy24eix3qibzd3u5s1kkl8wb38acilmwk94z2blrj3581u4qo8lmx13k6rlvd4cja1qnapdmnw6aw7p5esecfca2ncgmc4ewjmusumirk6fx27nzkdxvqm1jj9pxvyzxo1jklywwp8013zieh5s == \d\1\o\f\h\p\g\n\1\h\h\a\u\j\u\3\t\h\1\0\x\i\e\0\9\h\z\7\o\p\6\q\7\a\s\l\a\k\d\1\j\n\d\2\3\f\h\b\9\f\q\1\7\m\5\s\7\7\8\1\v\g\n\t\j\u\o\g\u\c\4\q\y\6\7\7\k\d\5\3\2\t\x\g\a\w\y\j\y\g\v\1\r\f\e\t\5\4\2\y\u\5\t\p\a\x\u\c\t\g\3\5\i\8\j\3\t\b\x\r\s\o\3\9\5\0\8\9\8\o\s\x\r\s\i\r\a\f\r\0\k\m\k\6\y\1\5\8\j\6\4\7\t\z\o\e\t\h\h\a\n\z\p\f\d\4\b\v\k\3\5\8\m\4\l\a\l\p\3\5\a\g\6\2\m\h\5\n\w\f\z\c\6\o\2\k\e\l\b\q\b\z\3\a\q\c\c\4\w\2\w\v\5\4\8\j\w\t\f\9\3\k\d\q\t\1\7\2\j\4\t\n\4\5\2\8\t\y\g\q\k\8\s\1\4\c\c\q\w\j\l\l\b\l\p\8\4\s\j\j\h\u\8\n\6\h\i\z\g\u\n\q\8\y\w\m\c\k\f\a\b\f\4\n\v\a\7\g\9\w\l\t\1\i\1\q\z\2\5\j\s\l\a\l\d\e\s\b\1\0\5\4\1\1\5\1\z\h\w\5\o\x\y\v\6\x\j\9\g\d\w\y\x\m\2\o\0\6\o\5\4\0\3\1\g\r\0\u\l\n\2\8\n\6\a\2\y\t\z\3\o\n\z\7\b\z\y\2\4\e\i\x\3\q\i\b\z\d\3\u\5\s\1\k\k\l\8\w\b\3\8\a\c\i\l\m\w\k\9\4\z\2\b\l\r\j\3\5\8\1\u\4\q\o\8\l\m\x\1\3\k\6\r\l\v\d\4\c\j\a\1\q\n\a\p\d\m\n\w\6\a\w\7\p\5\e\s\e\c\f\c\a\2\n\c\g\m\c\4\e\w\j\m\u\s\u\m\i\r\k\6\f\x\2\7\n\z\k\d\x\v\q\m\1\j\j\9\p\x\v\y\z\x\o\1\j\k\l\y\w\w\p\8\0\1\3\z\i\e\h\5\s ]] 00:07:32.560 00:07:32.560 real 0m3.836s 00:07:32.560 user 0m1.933s 00:07:32.560 sys 0m0.945s 00:07:32.560 ************************************ 00:07:32.560 END TEST dd_flags_misc_forced_aio 00:07:32.560 ************************************ 00:07:32.560 07:34:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.560 07:34:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:32.560 07:34:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:32.560 07:34:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:32.560 07:34:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:32.560 00:07:32.560 real 0m18.383s 00:07:32.560 user 0m8.310s 00:07:32.560 sys 0m5.795s 00:07:32.560 07:34:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.560 07:34:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:07:32.560 ************************************ 00:07:32.560 END TEST spdk_dd_posix 00:07:32.560 ************************************ 00:07:32.560 07:34:50 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:32.560 07:34:50 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:32.560 07:34:50 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.560 07:34:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:32.560 ************************************ 00:07:32.560 START TEST spdk_dd_malloc 00:07:32.560 ************************************ 00:07:32.560 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:32.856 * Looking for test storage... 00:07:32.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.856 --rc genhtml_branch_coverage=1 00:07:32.856 --rc genhtml_function_coverage=1 00:07:32.856 --rc genhtml_legend=1 00:07:32.856 --rc geninfo_all_blocks=1 00:07:32.856 --rc geninfo_unexecuted_blocks=1 00:07:32.856 00:07:32.856 ' 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.856 --rc genhtml_branch_coverage=1 00:07:32.856 --rc genhtml_function_coverage=1 00:07:32.856 --rc genhtml_legend=1 00:07:32.856 --rc geninfo_all_blocks=1 00:07:32.856 --rc geninfo_unexecuted_blocks=1 00:07:32.856 00:07:32.856 ' 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.856 --rc genhtml_branch_coverage=1 00:07:32.856 --rc genhtml_function_coverage=1 00:07:32.856 --rc genhtml_legend=1 00:07:32.856 --rc geninfo_all_blocks=1 00:07:32.856 --rc geninfo_unexecuted_blocks=1 00:07:32.856 00:07:32.856 ' 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.856 --rc genhtml_branch_coverage=1 00:07:32.856 --rc genhtml_function_coverage=1 00:07:32.856 --rc genhtml_legend=1 00:07:32.856 --rc geninfo_all_blocks=1 00:07:32.856 --rc geninfo_unexecuted_blocks=1 00:07:32.856 00:07:32.856 ' 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.856 07:34:50 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:32.856 ************************************ 00:07:32.856 START TEST dd_malloc_copy 00:07:32.856 ************************************ 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:32.856 07:34:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.856 [2024-11-08 07:34:50.727246] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:32.856 [2024-11-08 07:34:50.727322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60603 ] 00:07:32.856 { 00:07:32.856 "subsystems": [ 00:07:32.856 { 00:07:32.856 "subsystem": "bdev", 00:07:32.856 "config": [ 00:07:32.856 { 00:07:32.856 "params": { 00:07:32.856 "block_size": 512, 00:07:32.856 "num_blocks": 1048576, 00:07:32.856 "name": "malloc0" 00:07:32.856 }, 00:07:32.856 "method": "bdev_malloc_create" 00:07:32.856 }, 00:07:32.856 { 00:07:32.856 "params": { 00:07:32.856 "block_size": 512, 00:07:32.856 "num_blocks": 1048576, 00:07:32.856 "name": "malloc1" 00:07:32.856 }, 00:07:32.856 "method": "bdev_malloc_create" 00:07:32.856 }, 00:07:32.856 { 00:07:32.856 "method": "bdev_wait_for_examine" 00:07:32.856 } 00:07:32.856 ] 00:07:32.856 } 00:07:32.856 ] 00:07:32.856 } 00:07:33.115 [2024-11-08 07:34:50.865227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.115 [2024-11-08 07:34:50.915362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.115 [2024-11-08 07:34:50.957012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.492  [2024-11-08T07:34:53.389Z] Copying: 265/512 [MB] (265 MBps) [2024-11-08T07:34:53.648Z] Copying: 512/512 [MB] (average 264 MBps) 00:07:35.687 00:07:35.687 07:34:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:35.687 07:34:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:35.687 07:34:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:35.687 07:34:53 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:35.947 { 00:07:35.947 "subsystems": [ 00:07:35.947 { 00:07:35.947 "subsystem": "bdev", 00:07:35.947 "config": [ 00:07:35.947 { 00:07:35.947 "params": { 00:07:35.947 "block_size": 512, 00:07:35.947 "num_blocks": 1048576, 00:07:35.947 "name": "malloc0" 00:07:35.947 }, 00:07:35.947 "method": "bdev_malloc_create" 00:07:35.947 }, 00:07:35.947 { 00:07:35.947 "params": { 00:07:35.947 "block_size": 512, 00:07:35.947 "num_blocks": 1048576, 00:07:35.947 "name": "malloc1" 00:07:35.947 }, 00:07:35.947 "method": "bdev_malloc_create" 00:07:35.947 }, 00:07:35.947 { 00:07:35.947 
"method": "bdev_wait_for_examine" 00:07:35.947 } 00:07:35.947 ] 00:07:35.947 } 00:07:35.947 ] 00:07:35.947 } 00:07:35.947 [2024-11-08 07:34:53.676466] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:35.947 [2024-11-08 07:34:53.677315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:07:35.947 [2024-11-08 07:34:53.826103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.947 [2024-11-08 07:34:53.876174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.205 [2024-11-08 07:34:53.918009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.585  [2024-11-08T07:34:56.483Z] Copying: 265/512 [MB] (265 MBps) [2024-11-08T07:34:56.742Z] Copying: 512/512 [MB] (average 264 MBps) 00:07:38.781 00:07:38.781 00:07:38.781 real 0m5.904s 00:07:38.781 user 0m5.064s 00:07:38.781 sys 0m0.682s 00:07:38.781 07:34:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.781 07:34:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:38.781 ************************************ 00:07:38.781 END TEST dd_malloc_copy 00:07:38.781 ************************************ 00:07:38.781 00:07:38.781 real 0m6.186s 00:07:38.781 user 0m5.202s 00:07:38.781 sys 0m0.827s 00:07:38.781 07:34:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.781 07:34:56 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:38.781 ************************************ 00:07:38.781 END TEST spdk_dd_malloc 00:07:38.781 ************************************ 00:07:38.781 07:34:56 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:38.781 07:34:56 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:38.781 07:34:56 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.781 07:34:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:38.781 ************************************ 00:07:38.781 START TEST spdk_dd_bdev_to_bdev 00:07:38.781 ************************************ 00:07:38.781 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:39.097 * Looking for test storage... 
00:07:39.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:39.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.097 --rc genhtml_branch_coverage=1 00:07:39.097 --rc genhtml_function_coverage=1 00:07:39.097 --rc genhtml_legend=1 00:07:39.097 --rc geninfo_all_blocks=1 00:07:39.097 --rc geninfo_unexecuted_blocks=1 00:07:39.097 00:07:39.097 ' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:39.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.097 --rc genhtml_branch_coverage=1 00:07:39.097 --rc genhtml_function_coverage=1 00:07:39.097 --rc genhtml_legend=1 00:07:39.097 --rc geninfo_all_blocks=1 00:07:39.097 --rc geninfo_unexecuted_blocks=1 00:07:39.097 00:07:39.097 ' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:39.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.097 --rc genhtml_branch_coverage=1 00:07:39.097 --rc genhtml_function_coverage=1 00:07:39.097 --rc genhtml_legend=1 00:07:39.097 --rc geninfo_all_blocks=1 00:07:39.097 --rc geninfo_unexecuted_blocks=1 00:07:39.097 00:07:39.097 ' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:39.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.097 --rc genhtml_branch_coverage=1 00:07:39.097 --rc genhtml_function_coverage=1 00:07:39.097 --rc genhtml_legend=1 00:07:39.097 --rc geninfo_all_blocks=1 00:07:39.097 --rc geninfo_unexecuted_blocks=1 00:07:39.097 00:07:39.097 ' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.097 07:34:56 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:39.097 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:39.097 ************************************ 00:07:39.097 START TEST dd_inflate_file 00:07:39.098 ************************************ 00:07:39.098 07:34:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:39.098 [2024-11-08 07:34:56.961283] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:39.098 [2024-11-08 07:34:56.961373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60754 ] 00:07:39.356 [2024-11-08 07:34:57.101435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.356 [2024-11-08 07:34:57.149630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.356 [2024-11-08 07:34:57.191409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.356  [2024-11-08T07:34:57.576Z] Copying: 64/64 [MB] (average 1361 MBps) 00:07:39.615 00:07:39.615 00:07:39.615 real 0m0.495s 00:07:39.615 user 0m0.278s 00:07:39.615 sys 0m0.267s 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 ************************************ 00:07:39.615 END TEST dd_inflate_file 00:07:39.615 ************************************ 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:39.615 ************************************ 00:07:39.615 START TEST dd_copy_to_out_bdev 00:07:39.615 ************************************ 00:07:39.615 07:34:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:39.615 [2024-11-08 07:34:57.522274] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:39.615 [2024-11-08 07:34:57.522343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60787 ] 00:07:39.615 { 00:07:39.615 "subsystems": [ 00:07:39.615 { 00:07:39.615 "subsystem": "bdev", 00:07:39.615 "config": [ 00:07:39.615 { 00:07:39.615 "params": { 00:07:39.615 "trtype": "pcie", 00:07:39.615 "traddr": "0000:00:10.0", 00:07:39.615 "name": "Nvme0" 00:07:39.615 }, 00:07:39.615 "method": "bdev_nvme_attach_controller" 00:07:39.615 }, 00:07:39.615 { 00:07:39.615 "params": { 00:07:39.615 "trtype": "pcie", 00:07:39.615 "traddr": "0000:00:11.0", 00:07:39.615 "name": "Nvme1" 00:07:39.615 }, 00:07:39.615 "method": "bdev_nvme_attach_controller" 00:07:39.615 }, 00:07:39.615 { 00:07:39.615 "method": "bdev_wait_for_examine" 00:07:39.615 } 00:07:39.615 ] 00:07:39.615 } 00:07:39.615 ] 00:07:39.615 } 00:07:39.874 [2024-11-08 07:34:57.662817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.874 [2024-11-08 07:34:57.713114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.874 [2024-11-08 07:34:57.755001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.808  [2024-11-08T07:34:59.027Z] Copying: 64/64 [MB] (average 87 MBps) 00:07:41.066 00:07:41.066 00:07:41.066 real 0m1.370s 00:07:41.066 user 0m1.168s 00:07:41.066 sys 0m1.035s 00:07:41.066 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:41.066 ************************************ 00:07:41.066 END TEST dd_copy_to_out_bdev 00:07:41.066 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:41.067 ************************************ 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:41.067 ************************************ 00:07:41.067 START TEST dd_offset_magic 00:07:41.067 ************************************ 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:41.067 07:34:58 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:41.067 07:34:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:41.067 { 00:07:41.067 "subsystems": [ 00:07:41.067 { 00:07:41.067 "subsystem": "bdev", 00:07:41.067 "config": [ 00:07:41.067 { 00:07:41.067 "params": { 00:07:41.067 "trtype": "pcie", 00:07:41.067 "traddr": "0000:00:10.0", 00:07:41.067 "name": "Nvme0" 00:07:41.067 }, 00:07:41.067 "method": "bdev_nvme_attach_controller" 00:07:41.067 }, 00:07:41.067 { 00:07:41.067 "params": { 00:07:41.067 "trtype": "pcie", 00:07:41.067 "traddr": "0000:00:11.0", 00:07:41.067 "name": "Nvme1" 00:07:41.067 }, 00:07:41.067 "method": "bdev_nvme_attach_controller" 00:07:41.067 }, 00:07:41.067 { 00:07:41.067 "method": "bdev_wait_for_examine" 00:07:41.067 } 00:07:41.067 ] 00:07:41.067 } 00:07:41.067 ] 00:07:41.067 } 00:07:41.067 [2024-11-08 07:34:58.970511] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:41.067 [2024-11-08 07:34:58.970634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60827 ] 00:07:41.326 [2024-11-08 07:34:59.117575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.326 [2024-11-08 07:34:59.169845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.326 [2024-11-08 07:34:59.212067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.584  [2024-11-08T07:34:59.804Z] Copying: 65/65 [MB] (average 915 MBps) 00:07:41.843 00:07:41.843 07:34:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:41.843 07:34:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:41.843 07:34:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:41.843 07:34:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:41.844 [2024-11-08 07:34:59.707068] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:41.844 [2024-11-08 07:34:59.707164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60847 ] 00:07:41.844 { 00:07:41.844 "subsystems": [ 00:07:41.844 { 00:07:41.844 "subsystem": "bdev", 00:07:41.844 "config": [ 00:07:41.844 { 00:07:41.844 "params": { 00:07:41.844 "trtype": "pcie", 00:07:41.844 "traddr": "0000:00:10.0", 00:07:41.844 "name": "Nvme0" 00:07:41.844 }, 00:07:41.844 "method": "bdev_nvme_attach_controller" 00:07:41.844 }, 00:07:41.844 { 00:07:41.844 "params": { 00:07:41.844 "trtype": "pcie", 00:07:41.844 "traddr": "0000:00:11.0", 00:07:41.844 "name": "Nvme1" 00:07:41.844 }, 00:07:41.844 "method": "bdev_nvme_attach_controller" 00:07:41.844 }, 00:07:41.844 { 00:07:41.844 "method": "bdev_wait_for_examine" 00:07:41.844 } 00:07:41.844 ] 00:07:41.844 } 00:07:41.844 ] 00:07:41.844 } 00:07:42.102 [2024-11-08 07:34:59.855343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.102 [2024-11-08 07:34:59.903747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.102 [2024-11-08 07:34:59.945509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.361  [2024-11-08T07:35:00.322Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:42.361 00:07:42.361 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:42.361 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:42.361 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:42.361 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:42.361 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:42.361 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:42.361 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:42.620 { 00:07:42.620 "subsystems": [ 00:07:42.620 { 00:07:42.620 "subsystem": "bdev", 00:07:42.620 "config": [ 00:07:42.620 { 00:07:42.620 "params": { 00:07:42.620 "trtype": "pcie", 00:07:42.620 "traddr": "0000:00:10.0", 00:07:42.620 "name": "Nvme0" 00:07:42.620 }, 00:07:42.620 "method": "bdev_nvme_attach_controller" 00:07:42.620 }, 00:07:42.620 { 00:07:42.620 "params": { 00:07:42.620 "trtype": "pcie", 00:07:42.620 "traddr": "0000:00:11.0", 00:07:42.620 "name": "Nvme1" 00:07:42.620 }, 00:07:42.620 "method": "bdev_nvme_attach_controller" 00:07:42.620 }, 00:07:42.620 { 00:07:42.620 "method": "bdev_wait_for_examine" 00:07:42.620 } 00:07:42.620 ] 00:07:42.620 } 00:07:42.620 ] 00:07:42.620 } 00:07:42.620 [2024-11-08 07:35:00.327259] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:42.620 [2024-11-08 07:35:00.327354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60858 ] 00:07:42.620 [2024-11-08 07:35:00.477384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.620 [2024-11-08 07:35:00.526163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.620 [2024-11-08 07:35:00.568334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.879  [2024-11-08T07:35:01.099Z] Copying: 65/65 [MB] (average 1048 MBps) 00:07:43.138 00:07:43.138 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:43.139 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:43.139 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:43.139 07:35:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:43.139 { 00:07:43.139 "subsystems": [ 00:07:43.139 { 00:07:43.139 "subsystem": "bdev", 00:07:43.139 "config": [ 00:07:43.139 { 00:07:43.139 "params": { 00:07:43.139 "trtype": "pcie", 00:07:43.139 "traddr": "0000:00:10.0", 00:07:43.139 "name": "Nvme0" 00:07:43.139 }, 00:07:43.139 "method": "bdev_nvme_attach_controller" 00:07:43.139 }, 00:07:43.139 { 00:07:43.139 "params": { 00:07:43.139 "trtype": "pcie", 00:07:43.139 "traddr": "0000:00:11.0", 00:07:43.139 "name": "Nvme1" 00:07:43.139 }, 00:07:43.139 "method": "bdev_nvme_attach_controller" 00:07:43.139 }, 00:07:43.139 { 00:07:43.139 "method": "bdev_wait_for_examine" 00:07:43.139 } 00:07:43.139 ] 00:07:43.139 } 00:07:43.139 ] 00:07:43.139 } 00:07:43.139 [2024-11-08 07:35:01.047383] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:43.139 [2024-11-08 07:35:01.047485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:07:43.398 [2024-11-08 07:35:01.195155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.398 [2024-11-08 07:35:01.241703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.398 [2024-11-08 07:35:01.283808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.656  [2024-11-08T07:35:01.617Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:43.656 00:07:43.656 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:43.656 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:43.656 00:07:43.656 real 0m2.695s 00:07:43.656 user 0m1.916s 00:07:43.656 sys 0m0.796s 00:07:43.656 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.657 ************************************ 00:07:43.657 END TEST dd_offset_magic 00:07:43.657 ************************************ 00:07:43.657 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:43.916 07:35:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.916 { 00:07:43.916 "subsystems": [ 00:07:43.916 { 00:07:43.916 "subsystem": "bdev", 00:07:43.916 "config": [ 00:07:43.916 { 00:07:43.916 "params": { 00:07:43.916 "trtype": "pcie", 00:07:43.916 "traddr": "0000:00:10.0", 00:07:43.916 "name": "Nvme0" 00:07:43.916 }, 00:07:43.916 "method": "bdev_nvme_attach_controller" 00:07:43.916 }, 00:07:43.916 { 00:07:43.916 "params": { 00:07:43.916 "trtype": "pcie", 00:07:43.916 "traddr": "0000:00:11.0", 00:07:43.916 "name": "Nvme1" 00:07:43.916 }, 00:07:43.916 "method": "bdev_nvme_attach_controller" 00:07:43.916 }, 00:07:43.916 { 00:07:43.916 "method": "bdev_wait_for_examine" 00:07:43.916 } 00:07:43.916 ] 00:07:43.916 } 00:07:43.916 ] 00:07:43.916 } 00:07:43.916 [2024-11-08 07:35:01.716362] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
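An aside on the pattern just exercised (an illustrative reconstruction, not captured output): each dd_offset_magic pass drives spdk_dd purely through the JSON printed before the run, copying 65 MiB from Nvme0n1 into Nvme1n1 at a fixed offset and then reading 1 MiB back at the same offset to look for the 26-byte marker. Condensed, with the config passed by process substitution instead of the harness's /dev/fd/62 plumbing, and the redirection into read assumed:
    # gen_conf is the harness helper whose two-controller bdev JSON is shown above
    spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json <(gen_conf)
    spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=16 --bs=1048576 --json <(gen_conf)
    read -rn26 magic_check < dd.dump1
    [[ $magic_check == "This Is Our Magic, find it" ]]
The second entry of offsets=(16 64) repeats the same pair with --seek=64 and --skip=64, which is the run that completes above.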
00:07:43.916 [2024-11-08 07:35:01.716970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60910 ] 00:07:43.916 [2024-11-08 07:35:01.866829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.174 [2024-11-08 07:35:01.918770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.174 [2024-11-08 07:35:01.960767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.174  [2024-11-08T07:35:02.394Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:44.433 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:44.433 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:44.433 [2024-11-08 07:35:02.349970] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:44.433 [2024-11-08 07:35:02.350255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:07:44.433 { 00:07:44.433 "subsystems": [ 00:07:44.433 { 00:07:44.433 "subsystem": "bdev", 00:07:44.433 "config": [ 00:07:44.433 { 00:07:44.433 "params": { 00:07:44.433 "trtype": "pcie", 00:07:44.433 "traddr": "0000:00:10.0", 00:07:44.433 "name": "Nvme0" 00:07:44.433 }, 00:07:44.433 "method": "bdev_nvme_attach_controller" 00:07:44.433 }, 00:07:44.433 { 00:07:44.433 "params": { 00:07:44.433 "trtype": "pcie", 00:07:44.433 "traddr": "0000:00:11.0", 00:07:44.433 "name": "Nvme1" 00:07:44.433 }, 00:07:44.433 "method": "bdev_nvme_attach_controller" 00:07:44.433 }, 00:07:44.433 { 00:07:44.433 "method": "bdev_wait_for_examine" 00:07:44.433 } 00:07:44.433 ] 00:07:44.433 } 00:07:44.433 ] 00:07:44.433 } 00:07:44.693 [2024-11-08 07:35:02.498773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.693 [2024-11-08 07:35:02.549339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.693 [2024-11-08 07:35:02.591507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.951  [2024-11-08T07:35:03.177Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:07:45.216 00:07:45.216 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:45.216 ************************************ 00:07:45.216 END TEST spdk_dd_bdev_to_bdev 00:07:45.216 ************************************ 00:07:45.216 00:07:45.216 real 0m6.268s 00:07:45.216 user 0m4.405s 00:07:45.216 sys 0m2.846s 00:07:45.216 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.216 07:35:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:45.216 07:35:03 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:45.216 07:35:03 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:45.216 07:35:03 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:45.216 07:35:03 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.216 07:35:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:45.216 ************************************ 00:07:45.216 START TEST spdk_dd_uring 00:07:45.216 ************************************ 00:07:45.216 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:45.216 * Looking for test storage... 
00:07:45.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:45.216 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:45.216 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:07:45.216 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:45.483 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:45.483 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.483 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.483 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.483 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.483 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.483 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.483 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:45.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.484 --rc genhtml_branch_coverage=1 00:07:45.484 --rc genhtml_function_coverage=1 00:07:45.484 --rc genhtml_legend=1 00:07:45.484 --rc geninfo_all_blocks=1 00:07:45.484 --rc geninfo_unexecuted_blocks=1 00:07:45.484 00:07:45.484 ' 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:45.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.484 --rc genhtml_branch_coverage=1 00:07:45.484 --rc genhtml_function_coverage=1 00:07:45.484 --rc genhtml_legend=1 00:07:45.484 --rc geninfo_all_blocks=1 00:07:45.484 --rc geninfo_unexecuted_blocks=1 00:07:45.484 00:07:45.484 ' 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:45.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.484 --rc genhtml_branch_coverage=1 00:07:45.484 --rc genhtml_function_coverage=1 00:07:45.484 --rc genhtml_legend=1 00:07:45.484 --rc geninfo_all_blocks=1 00:07:45.484 --rc geninfo_unexecuted_blocks=1 00:07:45.484 00:07:45.484 ' 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:45.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.484 --rc genhtml_branch_coverage=1 00:07:45.484 --rc genhtml_function_coverage=1 00:07:45.484 --rc genhtml_legend=1 00:07:45.484 --rc geninfo_all_blocks=1 00:07:45.484 --rc geninfo_unexecuted_blocks=1 00:07:45.484 00:07:45.484 ' 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:45.484 ************************************ 00:07:45.484 START TEST dd_uring_copy 00:07:45.484 ************************************ 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:45.484 
07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.484 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=55l0sb47fu5xfeh7kxmmymjclws20d2u6duh6jk2ab8jq4lu6rwc40gnys27rj83ihvr33zmdsozt9csppzmw0v5h9bpj17ylreid79leeuzxdon6ljrt42aoow02f0ntabqkq63sp23hbmd5fdzana3ohze3dq9jvvl2v4bklqspvefazya4yb27qn91dbznr67v640u3fokhln4vxz777j0axwau50nebx83gmwoax90vlz27h2g4kew3wintl501x9riushirfefon0s40048mdy3fcujo1ebnjd3n60uvln0e4q13u88ytx82yo4i28o5vxvpstde8m4rhvacipp5qw68yteyhzej3h1glm4owkwo8r2ai79y0q6t7rq80so9t12l78hrm5eleysfasec88jcqt9xcxkphuqhp7wcxk5r05rnkl6xhxv7zwzut6te244lfp5rec59wex92waist7vfos47miymdpjal2q31h53m3x5wo90iupwr8rg7nstu4k4q2wk7he5ib2topfftshrkupkke4d7lkxuo8cukq0vgj35fpjiobglm7tlq4bx2u80i7vz4fbonz21mn4t7mqh2yfjhgnbbu9fm7bx12obgbe4rqy8ryibr5zsxmngp3uqfvl56oizrxpe1x12ayvqi7cuycdp2e5hjpl9blqvwdgedi55o92liubcxz6uzmgmvrk6em4t1m1d3pnpzb8yephia0px3nm1aejkf6o0ce15g2a31temjsafzfg5haqhzprubonfgcc5y7taqdb5osy1qwj7ddq7v7rx25jdd0nmdzulg9gzjkdqtvl0qju6omjwihs6asb4z0cmeese5x708g8vs0hw0o2q2m3hivxed8rsz0rq0jqrajtw52cg2l68t6oy6td3blri1usb180kwwgi98c4t1pucdkgntadkwl8bxvl5vdmoyjzx60wad5hiz421ac189329rdvjoyn0hg2eull7ige1pcv6ab8rbpvthbc5 00:07:45.485 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
55l0sb47fu5xfeh7kxmmymjclws20d2u6duh6jk2ab8jq4lu6rwc40gnys27rj83ihvr33zmdsozt9csppzmw0v5h9bpj17ylreid79leeuzxdon6ljrt42aoow02f0ntabqkq63sp23hbmd5fdzana3ohze3dq9jvvl2v4bklqspvefazya4yb27qn91dbznr67v640u3fokhln4vxz777j0axwau50nebx83gmwoax90vlz27h2g4kew3wintl501x9riushirfefon0s40048mdy3fcujo1ebnjd3n60uvln0e4q13u88ytx82yo4i28o5vxvpstde8m4rhvacipp5qw68yteyhzej3h1glm4owkwo8r2ai79y0q6t7rq80so9t12l78hrm5eleysfasec88jcqt9xcxkphuqhp7wcxk5r05rnkl6xhxv7zwzut6te244lfp5rec59wex92waist7vfos47miymdpjal2q31h53m3x5wo90iupwr8rg7nstu4k4q2wk7he5ib2topfftshrkupkke4d7lkxuo8cukq0vgj35fpjiobglm7tlq4bx2u80i7vz4fbonz21mn4t7mqh2yfjhgnbbu9fm7bx12obgbe4rqy8ryibr5zsxmngp3uqfvl56oizrxpe1x12ayvqi7cuycdp2e5hjpl9blqvwdgedi55o92liubcxz6uzmgmvrk6em4t1m1d3pnpzb8yephia0px3nm1aejkf6o0ce15g2a31temjsafzfg5haqhzprubonfgcc5y7taqdb5osy1qwj7ddq7v7rx25jdd0nmdzulg9gzjkdqtvl0qju6omjwihs6asb4z0cmeese5x708g8vs0hw0o2q2m3hivxed8rsz0rq0jqrajtw52cg2l68t6oy6td3blri1usb180kwwgi98c4t1pucdkgntadkwl8bxvl5vdmoyjzx60wad5hiz421ac189329rdvjoyn0hg2eull7ige1pcv6ab8rbpvthbc5 00:07:45.485 07:35:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:45.485 [2024-11-08 07:35:03.307438] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:45.485 [2024-11-08 07:35:03.307713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61005 ] 00:07:45.742 [2024-11-08 07:35:03.458363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.742 [2024-11-08 07:35:03.508844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.743 [2024-11-08 07:35:03.550391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.309  [2024-11-08T07:35:04.528Z] Copying: 511/511 [MB] (average 1641 MBps) 00:07:46.567 00:07:46.567 07:35:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:46.567 07:35:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:46.567 07:35:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:46.567 07:35:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.567 [2024-11-08 07:35:04.425628] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:46.567 [2024-11-08 07:35:04.425710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61022 ] 00:07:46.567 { 00:07:46.567 "subsystems": [ 00:07:46.567 { 00:07:46.567 "subsystem": "bdev", 00:07:46.567 "config": [ 00:07:46.567 { 00:07:46.567 "params": { 00:07:46.567 "block_size": 512, 00:07:46.567 "num_blocks": 1048576, 00:07:46.567 "name": "malloc0" 00:07:46.567 }, 00:07:46.567 "method": "bdev_malloc_create" 00:07:46.567 }, 00:07:46.567 { 00:07:46.567 "params": { 00:07:46.567 "filename": "/dev/zram1", 00:07:46.567 "name": "uring0" 00:07:46.567 }, 00:07:46.567 "method": "bdev_uring_create" 00:07:46.567 }, 00:07:46.567 { 00:07:46.567 "method": "bdev_wait_for_examine" 00:07:46.567 } 00:07:46.567 ] 00:07:46.567 } 00:07:46.567 ] 00:07:46.567 } 00:07:46.826 [2024-11-08 07:35:04.567472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.826 [2024-11-08 07:35:04.616085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.826 [2024-11-08 07:35:04.658702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.200  [2024-11-08T07:35:07.097Z] Copying: 254/512 [MB] (254 MBps) [2024-11-08T07:35:07.097Z] Copying: 507/512 [MB] (252 MBps) [2024-11-08T07:35:07.356Z] Copying: 512/512 [MB] (average 253 MBps) 00:07:49.395 00:07:49.395 07:35:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:49.395 07:35:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:49.395 07:35:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:49.395 07:35:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:49.395 [2024-11-08 07:35:07.212450] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
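A quick size check on the copies above (the byte counts are taken from the trace; the trailing newline from the echoed magic line is an assumption):
    magic_len=1025                      # 1024 generated bytes + newline (assumed)
    append_len=536869887                # the single bs=536869887 append from /dev/zero
    zram_bytes=$((512 * 1024 * 1024))   # the 512M zram device created above
    malloc_bytes=$((1048576 * 512))     # malloc0: 1048576 blocks of 512 bytes
    echo $((magic_len + append_len)) "$zram_bytes" "$malloc_bytes"   # 536870912, three times
So magic.dump0 is exactly 512 MiB, which is why the write into uring0 above reports a clean 512/512 [MB] with no partial block.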
00:07:49.395 { 00:07:49.395 "subsystems": [ 00:07:49.395 { 00:07:49.395 "subsystem": "bdev", 00:07:49.395 "config": [ 00:07:49.395 { 00:07:49.395 "params": { 00:07:49.395 "block_size": 512, 00:07:49.395 "num_blocks": 1048576, 00:07:49.395 "name": "malloc0" 00:07:49.395 }, 00:07:49.395 "method": "bdev_malloc_create" 00:07:49.395 }, 00:07:49.395 { 00:07:49.395 "params": { 00:07:49.395 "filename": "/dev/zram1", 00:07:49.395 "name": "uring0" 00:07:49.395 }, 00:07:49.395 "method": "bdev_uring_create" 00:07:49.395 }, 00:07:49.395 { 00:07:49.395 "method": "bdev_wait_for_examine" 00:07:49.395 } 00:07:49.395 ] 00:07:49.395 } 00:07:49.395 ] 00:07:49.395 } 00:07:49.395 [2024-11-08 07:35:07.212863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61067 ] 00:07:49.654 [2024-11-08 07:35:07.363126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.654 [2024-11-08 07:35:07.411045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.654 [2024-11-08 07:35:07.453838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.030  [2024-11-08T07:35:09.927Z] Copying: 205/512 [MB] (205 MBps) [2024-11-08T07:35:10.187Z] Copying: 397/512 [MB] (192 MBps) [2024-11-08T07:35:10.754Z] Copying: 512/512 [MB] (average 201 MBps) 00:07:52.793 00:07:52.793 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:52.793 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 55l0sb47fu5xfeh7kxmmymjclws20d2u6duh6jk2ab8jq4lu6rwc40gnys27rj83ihvr33zmdsozt9csppzmw0v5h9bpj17ylreid79leeuzxdon6ljrt42aoow02f0ntabqkq63sp23hbmd5fdzana3ohze3dq9jvvl2v4bklqspvefazya4yb27qn91dbznr67v640u3fokhln4vxz777j0axwau50nebx83gmwoax90vlz27h2g4kew3wintl501x9riushirfefon0s40048mdy3fcujo1ebnjd3n60uvln0e4q13u88ytx82yo4i28o5vxvpstde8m4rhvacipp5qw68yteyhzej3h1glm4owkwo8r2ai79y0q6t7rq80so9t12l78hrm5eleysfasec88jcqt9xcxkphuqhp7wcxk5r05rnkl6xhxv7zwzut6te244lfp5rec59wex92waist7vfos47miymdpjal2q31h53m3x5wo90iupwr8rg7nstu4k4q2wk7he5ib2topfftshrkupkke4d7lkxuo8cukq0vgj35fpjiobglm7tlq4bx2u80i7vz4fbonz21mn4t7mqh2yfjhgnbbu9fm7bx12obgbe4rqy8ryibr5zsxmngp3uqfvl56oizrxpe1x12ayvqi7cuycdp2e5hjpl9blqvwdgedi55o92liubcxz6uzmgmvrk6em4t1m1d3pnpzb8yephia0px3nm1aejkf6o0ce15g2a31temjsafzfg5haqhzprubonfgcc5y7taqdb5osy1qwj7ddq7v7rx25jdd0nmdzulg9gzjkdqtvl0qju6omjwihs6asb4z0cmeese5x708g8vs0hw0o2q2m3hivxed8rsz0rq0jqrajtw52cg2l68t6oy6td3blri1usb180kwwgi98c4t1pucdkgntadkwl8bxvl5vdmoyjzx60wad5hiz421ac189329rdvjoyn0hg2eull7ige1pcv6ab8rbpvthbc5 == 
\5\5\l\0\s\b\4\7\f\u\5\x\f\e\h\7\k\x\m\m\y\m\j\c\l\w\s\2\0\d\2\u\6\d\u\h\6\j\k\2\a\b\8\j\q\4\l\u\6\r\w\c\4\0\g\n\y\s\2\7\r\j\8\3\i\h\v\r\3\3\z\m\d\s\o\z\t\9\c\s\p\p\z\m\w\0\v\5\h\9\b\p\j\1\7\y\l\r\e\i\d\7\9\l\e\e\u\z\x\d\o\n\6\l\j\r\t\4\2\a\o\o\w\0\2\f\0\n\t\a\b\q\k\q\6\3\s\p\2\3\h\b\m\d\5\f\d\z\a\n\a\3\o\h\z\e\3\d\q\9\j\v\v\l\2\v\4\b\k\l\q\s\p\v\e\f\a\z\y\a\4\y\b\2\7\q\n\9\1\d\b\z\n\r\6\7\v\6\4\0\u\3\f\o\k\h\l\n\4\v\x\z\7\7\7\j\0\a\x\w\a\u\5\0\n\e\b\x\8\3\g\m\w\o\a\x\9\0\v\l\z\2\7\h\2\g\4\k\e\w\3\w\i\n\t\l\5\0\1\x\9\r\i\u\s\h\i\r\f\e\f\o\n\0\s\4\0\0\4\8\m\d\y\3\f\c\u\j\o\1\e\b\n\j\d\3\n\6\0\u\v\l\n\0\e\4\q\1\3\u\8\8\y\t\x\8\2\y\o\4\i\2\8\o\5\v\x\v\p\s\t\d\e\8\m\4\r\h\v\a\c\i\p\p\5\q\w\6\8\y\t\e\y\h\z\e\j\3\h\1\g\l\m\4\o\w\k\w\o\8\r\2\a\i\7\9\y\0\q\6\t\7\r\q\8\0\s\o\9\t\1\2\l\7\8\h\r\m\5\e\l\e\y\s\f\a\s\e\c\8\8\j\c\q\t\9\x\c\x\k\p\h\u\q\h\p\7\w\c\x\k\5\r\0\5\r\n\k\l\6\x\h\x\v\7\z\w\z\u\t\6\t\e\2\4\4\l\f\p\5\r\e\c\5\9\w\e\x\9\2\w\a\i\s\t\7\v\f\o\s\4\7\m\i\y\m\d\p\j\a\l\2\q\3\1\h\5\3\m\3\x\5\w\o\9\0\i\u\p\w\r\8\r\g\7\n\s\t\u\4\k\4\q\2\w\k\7\h\e\5\i\b\2\t\o\p\f\f\t\s\h\r\k\u\p\k\k\e\4\d\7\l\k\x\u\o\8\c\u\k\q\0\v\g\j\3\5\f\p\j\i\o\b\g\l\m\7\t\l\q\4\b\x\2\u\8\0\i\7\v\z\4\f\b\o\n\z\2\1\m\n\4\t\7\m\q\h\2\y\f\j\h\g\n\b\b\u\9\f\m\7\b\x\1\2\o\b\g\b\e\4\r\q\y\8\r\y\i\b\r\5\z\s\x\m\n\g\p\3\u\q\f\v\l\5\6\o\i\z\r\x\p\e\1\x\1\2\a\y\v\q\i\7\c\u\y\c\d\p\2\e\5\h\j\p\l\9\b\l\q\v\w\d\g\e\d\i\5\5\o\9\2\l\i\u\b\c\x\z\6\u\z\m\g\m\v\r\k\6\e\m\4\t\1\m\1\d\3\p\n\p\z\b\8\y\e\p\h\i\a\0\p\x\3\n\m\1\a\e\j\k\f\6\o\0\c\e\1\5\g\2\a\3\1\t\e\m\j\s\a\f\z\f\g\5\h\a\q\h\z\p\r\u\b\o\n\f\g\c\c\5\y\7\t\a\q\d\b\5\o\s\y\1\q\w\j\7\d\d\q\7\v\7\r\x\2\5\j\d\d\0\n\m\d\z\u\l\g\9\g\z\j\k\d\q\t\v\l\0\q\j\u\6\o\m\j\w\i\h\s\6\a\s\b\4\z\0\c\m\e\e\s\e\5\x\7\0\8\g\8\v\s\0\h\w\0\o\2\q\2\m\3\h\i\v\x\e\d\8\r\s\z\0\r\q\0\j\q\r\a\j\t\w\5\2\c\g\2\l\6\8\t\6\o\y\6\t\d\3\b\l\r\i\1\u\s\b\1\8\0\k\w\w\g\i\9\8\c\4\t\1\p\u\c\d\k\g\n\t\a\d\k\w\l\8\b\x\v\l\5\v\d\m\o\y\j\z\x\6\0\w\a\d\5\h\i\z\4\2\1\a\c\1\8\9\3\2\9\r\d\v\j\o\y\n\0\h\g\2\e\u\l\l\7\i\g\e\1\p\c\v\6\a\b\8\r\b\p\v\t\h\b\c\5 ]] 00:07:52.793 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:52.794 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 55l0sb47fu5xfeh7kxmmymjclws20d2u6duh6jk2ab8jq4lu6rwc40gnys27rj83ihvr33zmdsozt9csppzmw0v5h9bpj17ylreid79leeuzxdon6ljrt42aoow02f0ntabqkq63sp23hbmd5fdzana3ohze3dq9jvvl2v4bklqspvefazya4yb27qn91dbznr67v640u3fokhln4vxz777j0axwau50nebx83gmwoax90vlz27h2g4kew3wintl501x9riushirfefon0s40048mdy3fcujo1ebnjd3n60uvln0e4q13u88ytx82yo4i28o5vxvpstde8m4rhvacipp5qw68yteyhzej3h1glm4owkwo8r2ai79y0q6t7rq80so9t12l78hrm5eleysfasec88jcqt9xcxkphuqhp7wcxk5r05rnkl6xhxv7zwzut6te244lfp5rec59wex92waist7vfos47miymdpjal2q31h53m3x5wo90iupwr8rg7nstu4k4q2wk7he5ib2topfftshrkupkke4d7lkxuo8cukq0vgj35fpjiobglm7tlq4bx2u80i7vz4fbonz21mn4t7mqh2yfjhgnbbu9fm7bx12obgbe4rqy8ryibr5zsxmngp3uqfvl56oizrxpe1x12ayvqi7cuycdp2e5hjpl9blqvwdgedi55o92liubcxz6uzmgmvrk6em4t1m1d3pnpzb8yephia0px3nm1aejkf6o0ce15g2a31temjsafzfg5haqhzprubonfgcc5y7taqdb5osy1qwj7ddq7v7rx25jdd0nmdzulg9gzjkdqtvl0qju6omjwihs6asb4z0cmeese5x708g8vs0hw0o2q2m3hivxed8rsz0rq0jqrajtw52cg2l68t6oy6td3blri1usb180kwwgi98c4t1pucdkgntadkwl8bxvl5vdmoyjzx60wad5hiz421ac189329rdvjoyn0hg2eull7ige1pcv6ab8rbpvthbc5 == 
\5\5\l\0\s\b\4\7\f\u\5\x\f\e\h\7\k\x\m\m\y\m\j\c\l\w\s\2\0\d\2\u\6\d\u\h\6\j\k\2\a\b\8\j\q\4\l\u\6\r\w\c\4\0\g\n\y\s\2\7\r\j\8\3\i\h\v\r\3\3\z\m\d\s\o\z\t\9\c\s\p\p\z\m\w\0\v\5\h\9\b\p\j\1\7\y\l\r\e\i\d\7\9\l\e\e\u\z\x\d\o\n\6\l\j\r\t\4\2\a\o\o\w\0\2\f\0\n\t\a\b\q\k\q\6\3\s\p\2\3\h\b\m\d\5\f\d\z\a\n\a\3\o\h\z\e\3\d\q\9\j\v\v\l\2\v\4\b\k\l\q\s\p\v\e\f\a\z\y\a\4\y\b\2\7\q\n\9\1\d\b\z\n\r\6\7\v\6\4\0\u\3\f\o\k\h\l\n\4\v\x\z\7\7\7\j\0\a\x\w\a\u\5\0\n\e\b\x\8\3\g\m\w\o\a\x\9\0\v\l\z\2\7\h\2\g\4\k\e\w\3\w\i\n\t\l\5\0\1\x\9\r\i\u\s\h\i\r\f\e\f\o\n\0\s\4\0\0\4\8\m\d\y\3\f\c\u\j\o\1\e\b\n\j\d\3\n\6\0\u\v\l\n\0\e\4\q\1\3\u\8\8\y\t\x\8\2\y\o\4\i\2\8\o\5\v\x\v\p\s\t\d\e\8\m\4\r\h\v\a\c\i\p\p\5\q\w\6\8\y\t\e\y\h\z\e\j\3\h\1\g\l\m\4\o\w\k\w\o\8\r\2\a\i\7\9\y\0\q\6\t\7\r\q\8\0\s\o\9\t\1\2\l\7\8\h\r\m\5\e\l\e\y\s\f\a\s\e\c\8\8\j\c\q\t\9\x\c\x\k\p\h\u\q\h\p\7\w\c\x\k\5\r\0\5\r\n\k\l\6\x\h\x\v\7\z\w\z\u\t\6\t\e\2\4\4\l\f\p\5\r\e\c\5\9\w\e\x\9\2\w\a\i\s\t\7\v\f\o\s\4\7\m\i\y\m\d\p\j\a\l\2\q\3\1\h\5\3\m\3\x\5\w\o\9\0\i\u\p\w\r\8\r\g\7\n\s\t\u\4\k\4\q\2\w\k\7\h\e\5\i\b\2\t\o\p\f\f\t\s\h\r\k\u\p\k\k\e\4\d\7\l\k\x\u\o\8\c\u\k\q\0\v\g\j\3\5\f\p\j\i\o\b\g\l\m\7\t\l\q\4\b\x\2\u\8\0\i\7\v\z\4\f\b\o\n\z\2\1\m\n\4\t\7\m\q\h\2\y\f\j\h\g\n\b\b\u\9\f\m\7\b\x\1\2\o\b\g\b\e\4\r\q\y\8\r\y\i\b\r\5\z\s\x\m\n\g\p\3\u\q\f\v\l\5\6\o\i\z\r\x\p\e\1\x\1\2\a\y\v\q\i\7\c\u\y\c\d\p\2\e\5\h\j\p\l\9\b\l\q\v\w\d\g\e\d\i\5\5\o\9\2\l\i\u\b\c\x\z\6\u\z\m\g\m\v\r\k\6\e\m\4\t\1\m\1\d\3\p\n\p\z\b\8\y\e\p\h\i\a\0\p\x\3\n\m\1\a\e\j\k\f\6\o\0\c\e\1\5\g\2\a\3\1\t\e\m\j\s\a\f\z\f\g\5\h\a\q\h\z\p\r\u\b\o\n\f\g\c\c\5\y\7\t\a\q\d\b\5\o\s\y\1\q\w\j\7\d\d\q\7\v\7\r\x\2\5\j\d\d\0\n\m\d\z\u\l\g\9\g\z\j\k\d\q\t\v\l\0\q\j\u\6\o\m\j\w\i\h\s\6\a\s\b\4\z\0\c\m\e\e\s\e\5\x\7\0\8\g\8\v\s\0\h\w\0\o\2\q\2\m\3\h\i\v\x\e\d\8\r\s\z\0\r\q\0\j\q\r\a\j\t\w\5\2\c\g\2\l\6\8\t\6\o\y\6\t\d\3\b\l\r\i\1\u\s\b\1\8\0\k\w\w\g\i\9\8\c\4\t\1\p\u\c\d\k\g\n\t\a\d\k\w\l\8\b\x\v\l\5\v\d\m\o\y\j\z\x\6\0\w\a\d\5\h\i\z\4\2\1\a\c\1\8\9\3\2\9\r\d\v\j\o\y\n\0\h\g\2\e\u\l\l\7\i\g\e\1\p\c\v\6\a\b\8\r\b\p\v\t\h\b\c\5 ]] 00:07:52.794 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:53.052 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:53.052 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:53.052 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:53.052 07:35:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:53.052 [2024-11-08 07:35:10.958037] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
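The verification that just finished is a simple round trip (a sketch under the same process-substitution assumption as above; the harness actually feeds the JSON over numbered /dev/fd descriptors and redirects the read from magic.dump1):
    spdk_dd --if=magic.dump0 --ob=uring0      --json <(gen_conf)   # file -> uring bdev on /dev/zram1
    spdk_dd --ib=uring0      --of=magic.dump1 --json <(gen_conf)   # uring bdev -> file
    read -rn1024 verify_magic < magic.dump1
    [[ $verify_magic == "$magic" ]]                                 # first kilobyte matches
    diff -q magic.dump0 magic.dump1                                 # and so does the full 512 MiB
The pass whose output continues below reads the same device back through the bdev layer instead, with --ib=uring0 --ob=malloc0.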
00:07:53.052 [2024-11-08 07:35:10.958228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61123 ] 00:07:53.052 { 00:07:53.052 "subsystems": [ 00:07:53.052 { 00:07:53.052 "subsystem": "bdev", 00:07:53.052 "config": [ 00:07:53.052 { 00:07:53.052 "params": { 00:07:53.052 "block_size": 512, 00:07:53.052 "num_blocks": 1048576, 00:07:53.052 "name": "malloc0" 00:07:53.052 }, 00:07:53.052 "method": "bdev_malloc_create" 00:07:53.052 }, 00:07:53.052 { 00:07:53.052 "params": { 00:07:53.052 "filename": "/dev/zram1", 00:07:53.052 "name": "uring0" 00:07:53.052 }, 00:07:53.052 "method": "bdev_uring_create" 00:07:53.052 }, 00:07:53.052 { 00:07:53.052 "method": "bdev_wait_for_examine" 00:07:53.052 } 00:07:53.052 ] 00:07:53.052 } 00:07:53.052 ] 00:07:53.052 } 00:07:53.311 [2024-11-08 07:35:11.099337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.311 [2024-11-08 07:35:11.147677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.311 [2024-11-08 07:35:11.189834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.692  [2024-11-08T07:35:13.588Z] Copying: 192/512 [MB] (192 MBps) [2024-11-08T07:35:14.158Z] Copying: 382/512 [MB] (190 MBps) [2024-11-08T07:35:14.418Z] Copying: 512/512 [MB] (average 191 MBps) 00:07:56.457 00:07:56.457 07:35:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:56.457 07:35:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:56.457 07:35:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:56.457 07:35:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:56.457 07:35:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:56.457 07:35:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:56.457 07:35:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:56.457 07:35:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:56.457 [2024-11-08 07:35:14.397366] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:07:56.457 [2024-11-08 07:35:14.397445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61168 ] 00:07:56.718 { 00:07:56.718 "subsystems": [ 00:07:56.718 { 00:07:56.718 "subsystem": "bdev", 00:07:56.718 "config": [ 00:07:56.718 { 00:07:56.718 "params": { 00:07:56.718 "block_size": 512, 00:07:56.718 "num_blocks": 1048576, 00:07:56.718 "name": "malloc0" 00:07:56.718 }, 00:07:56.718 "method": "bdev_malloc_create" 00:07:56.718 }, 00:07:56.718 { 00:07:56.718 "params": { 00:07:56.718 "filename": "/dev/zram1", 00:07:56.718 "name": "uring0" 00:07:56.718 }, 00:07:56.718 "method": "bdev_uring_create" 00:07:56.718 }, 00:07:56.718 { 00:07:56.718 "params": { 00:07:56.718 "name": "uring0" 00:07:56.718 }, 00:07:56.718 "method": "bdev_uring_delete" 00:07:56.718 }, 00:07:56.718 { 00:07:56.718 "method": "bdev_wait_for_examine" 00:07:56.718 } 00:07:56.718 ] 00:07:56.718 } 00:07:56.718 ] 00:07:56.718 } 00:07:56.718 [2024-11-08 07:35:14.549228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.718 [2024-11-08 07:35:14.610147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.718 [2024-11-08 07:35:14.658483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.978  [2024-11-08T07:35:15.199Z] Copying: 0/0 [B] (average 0 Bps) 00:07:57.238 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.238 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.238 07:35:15 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:57.497 { 00:07:57.497 "subsystems": [ 00:07:57.498 { 00:07:57.498 "subsystem": "bdev", 00:07:57.498 "config": [ 00:07:57.498 { 00:07:57.498 "params": { 00:07:57.498 "block_size": 512, 00:07:57.498 "num_blocks": 1048576, 00:07:57.498 "name": "malloc0" 00:07:57.498 }, 00:07:57.498 "method": "bdev_malloc_create" 00:07:57.498 }, 00:07:57.498 { 00:07:57.498 "params": { 00:07:57.498 "filename": "/dev/zram1", 00:07:57.498 "name": "uring0" 00:07:57.498 }, 00:07:57.498 "method": "bdev_uring_create" 00:07:57.498 }, 00:07:57.498 { 00:07:57.498 "params": { 00:07:57.498 "name": "uring0" 00:07:57.498 }, 00:07:57.498 "method": "bdev_uring_delete" 00:07:57.498 }, 00:07:57.498 { 00:07:57.498 "method": "bdev_wait_for_examine" 00:07:57.498 } 00:07:57.498 ] 00:07:57.498 } 00:07:57.498 ] 00:07:57.498 } 00:07:57.498 [2024-11-08 07:35:15.228132] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:57.498 [2024-11-08 07:35:15.228246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61204 ] 00:07:57.498 [2024-11-08 07:35:15.378935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.498 [2024-11-08 07:35:15.424815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.757 [2024-11-08 07:35:15.469543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.757 [2024-11-08 07:35:15.634166] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:57.757 [2024-11-08 07:35:15.634403] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:57.757 [2024-11-08 07:35:15.634419] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:57.757 [2024-11-08 07:35:15.634431] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.016 [2024-11-08 07:35:15.882455] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:58.016 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:58.017 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:58.017 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:58.017 07:35:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:58.585 00:07:58.585 real 0m13.037s 00:07:58.585 user 0m8.388s 00:07:58.585 sys 0m11.335s 00:07:58.585 07:35:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.585 ************************************ 00:07:58.585 END TEST dd_uring_copy 00:07:58.585 ************************************ 00:07:58.585 07:35:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:58.585 ************************************ 00:07:58.585 END TEST spdk_dd_uring 00:07:58.585 ************************************ 00:07:58.585 00:07:58.585 real 0m13.306s 00:07:58.585 user 0m8.530s 00:07:58.585 sys 0m11.465s 00:07:58.585 07:35:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.585 07:35:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:58.585 07:35:16 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:58.585 07:35:16 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.585 07:35:16 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.585 07:35:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:58.585 ************************************ 00:07:58.585 START TEST spdk_dd_sparse 00:07:58.585 ************************************ 00:07:58.585 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:58.585 * Looking for test storage... 00:07:58.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:58.585 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:58.585 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:07:58.585 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.845 --rc genhtml_branch_coverage=1 00:07:58.845 --rc genhtml_function_coverage=1 00:07:58.845 --rc genhtml_legend=1 00:07:58.845 --rc geninfo_all_blocks=1 00:07:58.845 --rc geninfo_unexecuted_blocks=1 00:07:58.845 00:07:58.845 ' 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.845 --rc genhtml_branch_coverage=1 00:07:58.845 --rc genhtml_function_coverage=1 00:07:58.845 --rc genhtml_legend=1 00:07:58.845 --rc geninfo_all_blocks=1 00:07:58.845 --rc geninfo_unexecuted_blocks=1 00:07:58.845 00:07:58.845 ' 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.845 --rc genhtml_branch_coverage=1 00:07:58.845 --rc genhtml_function_coverage=1 00:07:58.845 --rc genhtml_legend=1 00:07:58.845 --rc geninfo_all_blocks=1 00:07:58.845 --rc geninfo_unexecuted_blocks=1 00:07:58.845 00:07:58.845 ' 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.845 --rc genhtml_branch_coverage=1 00:07:58.845 --rc genhtml_function_coverage=1 00:07:58.845 --rc genhtml_legend=1 00:07:58.845 --rc geninfo_all_blocks=1 00:07:58.845 --rc geninfo_unexecuted_blocks=1 00:07:58.845 00:07:58.845 ' 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.845 07:35:16 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:58.845 1+0 records in 00:07:58.845 1+0 records out 00:07:58.845 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00486561 s, 862 MB/s 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:58.845 1+0 records in 00:07:58.845 1+0 records out 00:07:58.845 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00723623 s, 580 MB/s 00:07:58.845 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:58.845 1+0 records in 00:07:58.845 1+0 records out 00:07:58.846 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00468775 s, 895 MB/s 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:58.846 ************************************ 00:07:58.846 START TEST dd_sparse_file_to_file 00:07:58.846 ************************************ 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:58.846 07:35:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:58.846 { 00:07:58.846 "subsystems": [ 00:07:58.846 { 00:07:58.846 "subsystem": "bdev", 00:07:58.846 "config": [ 00:07:58.846 { 00:07:58.846 "params": { 00:07:58.846 "block_size": 4096, 00:07:58.846 "filename": "dd_sparse_aio_disk", 00:07:58.846 "name": "dd_aio" 00:07:58.846 }, 00:07:58.846 "method": "bdev_aio_create" 00:07:58.846 }, 00:07:58.846 { 00:07:58.846 "params": { 00:07:58.846 "lvs_name": "dd_lvstore", 00:07:58.846 "bdev_name": "dd_aio" 00:07:58.846 }, 00:07:58.846 "method": "bdev_lvol_create_lvstore" 00:07:58.846 }, 00:07:58.846 { 00:07:58.846 "method": "bdev_wait_for_examine" 00:07:58.846 } 00:07:58.846 ] 00:07:58.846 } 00:07:58.846 ] 00:07:58.846 } 00:07:58.846 [2024-11-08 07:35:16.682836] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
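(Aside, not part of the captured trace.) The prepare step above builds the sparse input that the three sparse tests copy back and forth: dd_sparse_aio_disk is a 100 MB truncate'd backing file for the AIO bdev named in the --json config, and file_zero1 receives three 4 MiB chunks at 4 MiB-block offsets 0, 4 and 8, leaving holes in between. The tests later compare apparent size (stat %s) against allocated blocks (stat %b) to confirm that spdk_dd's --sparse mode preserved those holes across the copy. A minimal standalone sketch of that preparation, assuming only coreutils truncate/dd/stat:

  # sketch only: rebuild the sparse input used by dd/sparse.sh
  truncate dd_sparse_aio_disk --size 104857600        # 100 MB backing file for the dd_aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1          # 4 MiB of data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # 4 MiB at 16 MiB, with a hole before it
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # 4 MiB at 32 MiB; apparent size is now 36 MiB
  stat --printf='%s\n' file_zero1                      # expect 37748736 (36 MiB apparent size)
  stat --printf='%b\n' file_zero1                      # expect 24576 512-byte blocks (12 MiB allocated)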
00:07:58.846 [2024-11-08 07:35:16.683113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61304 ] 00:07:59.105 [2024-11-08 07:35:16.842302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.105 [2024-11-08 07:35:16.902520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.105 [2024-11-08 07:35:16.950788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.105  [2024-11-08T07:35:17.326Z] Copying: 12/36 [MB] (average 750 MBps) 00:07:59.365 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:59.365 00:07:59.365 real 0m0.648s 00:07:59.365 user 0m0.378s 00:07:59.365 sys 0m0.353s 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.365 ************************************ 00:07:59.365 END TEST dd_sparse_file_to_file 00:07:59.365 ************************************ 00:07:59.365 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:59.625 ************************************ 00:07:59.625 START TEST dd_sparse_file_to_bdev 00:07:59.625 ************************************ 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:59.625 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:59.625 [2024-11-08 07:35:17.382402] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:07:59.625 [2024-11-08 07:35:17.382468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:07:59.625 { 00:07:59.625 "subsystems": [ 00:07:59.625 { 00:07:59.625 "subsystem": "bdev", 00:07:59.625 "config": [ 00:07:59.625 { 00:07:59.625 "params": { 00:07:59.625 "block_size": 4096, 00:07:59.625 "filename": "dd_sparse_aio_disk", 00:07:59.625 "name": "dd_aio" 00:07:59.625 }, 00:07:59.625 "method": "bdev_aio_create" 00:07:59.625 }, 00:07:59.625 { 00:07:59.625 "params": { 00:07:59.625 "lvs_name": "dd_lvstore", 00:07:59.625 "lvol_name": "dd_lvol", 00:07:59.625 "size_in_mib": 36, 00:07:59.625 "thin_provision": true 00:07:59.625 }, 00:07:59.625 "method": "bdev_lvol_create" 00:07:59.625 }, 00:07:59.625 { 00:07:59.625 "method": "bdev_wait_for_examine" 00:07:59.625 } 00:07:59.625 ] 00:07:59.625 } 00:07:59.625 ] 00:07:59.625 } 00:07:59.625 [2024-11-08 07:35:17.532036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.884 [2024-11-08 07:35:17.594212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.884 [2024-11-08 07:35:17.642431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.884  [2024-11-08T07:35:18.105Z] Copying: 12/36 [MB] (average 500 MBps) 00:08:00.144 00:08:00.144 ************************************ 00:08:00.144 END TEST dd_sparse_file_to_bdev 00:08:00.144 ************************************ 00:08:00.144 00:08:00.144 real 0m0.573s 00:08:00.144 user 0m0.355s 00:08:00.144 sys 0m0.302s 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:00.144 ************************************ 00:08:00.144 START TEST dd_sparse_bdev_to_file 00:08:00.144 ************************************ 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:00.144 07:35:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:00.144 { 00:08:00.144 "subsystems": [ 00:08:00.144 { 00:08:00.144 "subsystem": "bdev", 00:08:00.144 "config": [ 00:08:00.144 { 00:08:00.144 "params": { 00:08:00.144 "block_size": 4096, 00:08:00.144 "filename": "dd_sparse_aio_disk", 00:08:00.144 "name": "dd_aio" 00:08:00.144 }, 00:08:00.144 "method": "bdev_aio_create" 00:08:00.144 }, 00:08:00.144 { 00:08:00.144 "method": "bdev_wait_for_examine" 00:08:00.144 } 00:08:00.144 ] 00:08:00.144 } 00:08:00.144 ] 00:08:00.144 } 00:08:00.144 [2024-11-08 07:35:18.022617] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:08:00.144 [2024-11-08 07:35:18.022852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61385 ] 00:08:00.403 [2024-11-08 07:35:18.174520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.403 [2024-11-08 07:35:18.226028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.403 [2024-11-08 07:35:18.267505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.403  [2024-11-08T07:35:18.623Z] Copying: 12/36 [MB] (average 857 MBps) 00:08:00.662 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:00.662 ************************************ 00:08:00.662 END TEST dd_sparse_bdev_to_file 00:08:00.662 ************************************ 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:00.662 07:35:18 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:00.662 00:08:00.662 real 0m0.576s 00:08:00.662 user 0m0.334s 00:08:00.662 sys 0m0.317s 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:00.662 07:35:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:00.922 ************************************ 00:08:00.922 END TEST spdk_dd_sparse 00:08:00.922 ************************************ 00:08:00.922 00:08:00.922 real 0m2.247s 00:08:00.922 user 0m1.261s 00:08:00.922 sys 0m1.226s 00:08:00.922 07:35:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.922 07:35:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:00.922 07:35:18 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:00.922 07:35:18 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.922 07:35:18 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.922 07:35:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:00.922 ************************************ 00:08:00.922 START TEST spdk_dd_negative 00:08:00.922 ************************************ 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:00.922 * Looking for test storage... 
00:08:00.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.922 --rc genhtml_branch_coverage=1 00:08:00.922 --rc genhtml_function_coverage=1 00:08:00.922 --rc genhtml_legend=1 00:08:00.922 --rc geninfo_all_blocks=1 00:08:00.922 --rc geninfo_unexecuted_blocks=1 00:08:00.922 00:08:00.922 ' 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.922 --rc genhtml_branch_coverage=1 00:08:00.922 --rc genhtml_function_coverage=1 00:08:00.922 --rc genhtml_legend=1 00:08:00.922 --rc geninfo_all_blocks=1 00:08:00.922 --rc geninfo_unexecuted_blocks=1 00:08:00.922 00:08:00.922 ' 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.922 --rc genhtml_branch_coverage=1 00:08:00.922 --rc genhtml_function_coverage=1 00:08:00.922 --rc genhtml_legend=1 00:08:00.922 --rc geninfo_all_blocks=1 00:08:00.922 --rc geninfo_unexecuted_blocks=1 00:08:00.922 00:08:00.922 ' 00:08:00.922 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.923 --rc genhtml_branch_coverage=1 00:08:00.923 --rc genhtml_function_coverage=1 00:08:00.923 --rc genhtml_legend=1 00:08:00.923 --rc geninfo_all_blocks=1 00:08:00.923 --rc geninfo_unexecuted_blocks=1 00:08:00.923 00:08:00.923 ' 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.923 07:35:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.183 ************************************ 00:08:01.183 START TEST 
dd_invalid_arguments 00:08:01.183 ************************************ 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.183 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:01.183 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:01.183 00:08:01.183 CPU options: 00:08:01.183 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:01.183 (like [0,1,10]) 00:08:01.183 --lcores lcore to CPU mapping list. The list is in the format: 00:08:01.183 [<,lcores[@CPUs]>...] 00:08:01.183 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:01.183 Within the group, '-' is used for range separator, 00:08:01.183 ',' is used for single number separator. 00:08:01.184 '( )' can be omitted for single element group, 00:08:01.184 '@' can be omitted if cpus and lcores have the same value 00:08:01.184 --disable-cpumask-locks Disable CPU core lock files. 00:08:01.184 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:01.184 pollers in the app support interrupt mode) 00:08:01.184 -p, --main-core main (primary) core for DPDK 00:08:01.184 00:08:01.184 Configuration options: 00:08:01.184 -c, --config, --json JSON config file 00:08:01.184 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:01.184 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:01.184 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:01.184 --rpcs-allowed comma-separated list of permitted RPCS 00:08:01.184 --json-ignore-init-errors don't exit on invalid config entry 00:08:01.184 00:08:01.184 Memory options: 00:08:01.184 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:01.184 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:01.184 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:01.184 -R, --huge-unlink unlink huge files after initialization 00:08:01.184 -n, --mem-channels number of memory channels used for DPDK 00:08:01.184 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:01.184 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:01.184 --no-huge run without using hugepages 00:08:01.184 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:01.184 -i, --shm-id shared memory ID (optional) 00:08:01.184 -g, --single-file-segments force creating just one hugetlbfs file 00:08:01.184 00:08:01.184 PCI options: 00:08:01.184 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:01.184 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:01.184 -u, --no-pci disable PCI access 00:08:01.184 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:01.184 00:08:01.184 Log options: 00:08:01.184 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:01.184 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:01.184 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:01.184 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:01.184 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:01.184 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:01.184 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:01.184 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:01.184 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:01.184 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:01.184 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:01.184 --silence-noticelog disable notice level logging to stderr 00:08:01.184 00:08:01.184 Trace options: 00:08:01.184 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:01.184 setting 0 to disable trace (default 32768) 00:08:01.184 Tracepoints vary in size and can use more than one trace entry. 00:08:01.184 -e, --tpoint-group [:] 00:08:01.184 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:01.184 [2024-11-08 07:35:18.955023] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:01.184 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:01.184 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:01.184 bdev_raid, scheduler, all). 00:08:01.184 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:01.184 a tracepoint group. First tpoint inside a group can be enabled by 00:08:01.184 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:01.184 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:01.184 in /include/spdk_internal/trace_defs.h 00:08:01.184 00:08:01.184 Other options: 00:08:01.184 -h, --help show this usage 00:08:01.184 -v, --version print SPDK version 00:08:01.184 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:01.184 --env-context Opaque context for use of the env implementation 00:08:01.184 00:08:01.184 Application specific: 00:08:01.184 [--------- DD Options ---------] 00:08:01.184 --if Input file. Must specify either --if or --ib. 00:08:01.184 --ib Input bdev. Must specifier either --if or --ib 00:08:01.184 --of Output file. Must specify either --of or --ob. 00:08:01.184 --ob Output bdev. Must specify either --of or --ob. 00:08:01.184 --iflag Input file flags. 00:08:01.184 --oflag Output file flags. 00:08:01.184 --bs I/O unit size (default: 4096) 00:08:01.184 --qd Queue depth (default: 2) 00:08:01.184 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:01.184 --skip Skip this many I/O units at start of input. (default: 0) 00:08:01.184 --seek Skip this many I/O units at start of output. (default: 0) 00:08:01.184 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:01.184 --sparse Enable hole skipping in input target 00:08:01.184 Available iflag and oflag values: 00:08:01.184 append - append mode 00:08:01.184 direct - use direct I/O for data 00:08:01.184 directory - fail unless a directory 00:08:01.184 dsync - use synchronized I/O for data 00:08:01.184 noatime - do not update access time 00:08:01.184 noctty - do not assign controlling terminal from file 00:08:01.184 nofollow - do not follow symlinks 00:08:01.184 nonblock - use non-blocking I/O 00:08:01.184 sync - use synchronized I/O for data and metadata 00:08:01.184 ************************************ 00:08:01.184 END TEST dd_invalid_arguments 00:08:01.184 ************************************ 00:08:01.184 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:01.184 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.184 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.184 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.184 00:08:01.184 real 0m0.079s 00:08:01.184 user 0m0.043s 00:08:01.184 sys 0m0.034s 00:08:01.184 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.184 07:35:18 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:01.184 07:35:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:01.184 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:01.184 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.185 ************************************ 00:08:01.185 START TEST dd_double_input 00:08:01.185 ************************************ 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:01.185 [2024-11-08 07:35:19.096588] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
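(Aside.) The error above is exactly what dd_double_input probes for: the NOT wrapper from autotest_common.sh expects this spdk_dd invocation to fail because it names both a file input (--if) and a bdev input (--ib). A hedged standalone sketch of the same check, reusing the invocation from the trace; the echo reporting is illustrative only:

  # sketch: spdk_dd must refuse a command that supplies both --if and --ib
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= \
      && echo 'unexpected: both inputs accepted' \
      || echo 'rejected as expected: use either --if or --ib, not both'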
00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.185 00:08:01.185 real 0m0.079s 00:08:01.185 user 0m0.046s 00:08:01.185 sys 0m0.031s 00:08:01.185 ************************************ 00:08:01.185 END TEST dd_double_input 00:08:01.185 ************************************ 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.185 07:35:19 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.445 ************************************ 00:08:01.445 START TEST dd_double_output 00:08:01.445 ************************************ 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:01.445 [2024-11-08 07:35:19.236101] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.445 00:08:01.445 real 0m0.083s 00:08:01.445 user 0m0.038s 00:08:01.445 sys 0m0.044s 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.445 ************************************ 00:08:01.445 END TEST dd_double_output 00:08:01.445 ************************************ 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.445 ************************************ 00:08:01.445 START TEST dd_no_input 00:08:01.445 ************************************ 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.445 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.446 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.446 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:01.446 [2024-11-08 07:35:19.377662] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:01.446 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:01.446 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.446 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.446 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.446 ************************************ 00:08:01.446 END TEST dd_no_input 00:08:01.446 ************************************ 00:08:01.446 00:08:01.446 real 0m0.079s 00:08:01.446 user 0m0.046s 00:08:01.446 sys 0m0.032s 00:08:01.446 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.446 07:35:19 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.706 ************************************ 00:08:01.706 START TEST dd_no_output 00:08:01.706 ************************************ 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.706 [2024-11-08 07:35:19.519186] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:01.706 07:35:19 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.706 00:08:01.706 real 0m0.079s 00:08:01.706 user 0m0.048s 00:08:01.706 sys 0m0.030s 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.706 ************************************ 00:08:01.706 END TEST dd_no_output 00:08:01.706 ************************************ 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:01.706 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.707 ************************************ 00:08:01.707 START TEST dd_wrong_blocksize 00:08:01.707 ************************************ 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.707 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:01.966 [2024-11-08 07:35:19.666505] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.966 00:08:01.966 real 0m0.084s 00:08:01.966 user 0m0.049s 00:08:01.966 sys 0m0.033s 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.966 ************************************ 00:08:01.966 END TEST dd_wrong_blocksize 00:08:01.966 ************************************ 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:01.966 ************************************ 00:08:01.966 START TEST dd_smaller_blocksize 00:08:01.966 ************************************ 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.966 
07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.966 07:35:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:01.966 [2024-11-08 07:35:19.806933] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:08:01.966 [2024-11-08 07:35:19.807041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61611 ] 00:08:02.226 [2024-11-08 07:35:19.964249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.226 [2024-11-08 07:35:20.021417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.226 [2024-11-08 07:35:20.068997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.484 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:02.744 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:02.744 [2024-11-08 07:35:20.660590] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:02.744 [2024-11-08 07:35:20.660644] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.002 [2024-11-08 07:35:20.755187] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:03.002 ************************************ 00:08:03.002 END TEST dd_smaller_blocksize 00:08:03.002 ************************************ 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.002 00:08:03.002 real 0m1.068s 00:08:03.002 user 0m0.383s 00:08:03.002 sys 0m0.577s 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:03.002 ************************************ 00:08:03.002 START TEST dd_invalid_count 00:08:03.002 ************************************ 00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
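(Aside on the dd_smaller_blocksize result above.) The --bs=99999999999999 request is meant to be unsatisfiable: EAL logs the memseg allocation failures, spdk_dd gives up with "Cannot allocate memory - try smaller block size value", and the NOT/es bookkeeping then counts the non-zero exit as the expected failure. A minimal sketch of the same negative check, under the assumption that hugepage memory is far smaller than the requested block size:

  # sketch: an oversized --bs should fail to allocate rather than copy anything
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --bs=99999999999999 \
      || echo 'failed as expected: try a smaller --bs'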
00:08:03.002 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:03.003 [2024-11-08 07:35:20.927282] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.003 00:08:03.003 real 0m0.076s 00:08:03.003 user 0m0.049s 00:08:03.003 sys 0m0.026s 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.003 ************************************ 00:08:03.003 END TEST dd_invalid_count 00:08:03.003 ************************************ 00:08:03.003 07:35:20 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:03.260 07:35:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:03.261 07:35:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.261 07:35:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.261 07:35:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:03.261 ************************************ 
00:08:03.261 START TEST dd_invalid_oflag 00:08:03.261 ************************************ 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:03.261 [2024-11-08 07:35:21.065941] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.261 00:08:03.261 real 0m0.078s 00:08:03.261 user 0m0.044s 00:08:03.261 sys 0m0.032s 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.261 ************************************ 00:08:03.261 END TEST dd_invalid_oflag 00:08:03.261 ************************************ 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:03.261 ************************************ 00:08:03.261 START TEST dd_invalid_iflag 00:08:03.261 
************************************ 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.261 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:03.261 [2024-11-08 07:35:21.205897] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.520 00:08:03.520 real 0m0.078s 00:08:03.520 user 0m0.039s 00:08:03.520 sys 0m0.038s 00:08:03.520 ************************************ 00:08:03.520 END TEST dd_invalid_iflag 00:08:03.520 ************************************ 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:03.520 ************************************ 00:08:03.520 START TEST dd_unknown_flag 00:08:03.520 ************************************ 00:08:03.520 
07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.520 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:03.520 [2024-11-08 07:35:21.351434] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:03.520 [2024-11-08 07:35:21.351526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61709 ] 00:08:03.780 [2024-11-08 07:35:21.502440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.780 [2024-11-08 07:35:21.553326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.780 [2024-11-08 07:35:21.594806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.780 [2024-11-08 07:35:21.622682] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:03.780 [2024-11-08 07:35:21.622920] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.780 [2024-11-08 07:35:21.623000] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:03.780 [2024-11-08 07:35:21.623012] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.780 [2024-11-08 07:35:21.623203] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:03.780 [2024-11-08 07:35:21.623216] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.780 [2024-11-08 07:35:21.623269] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:03.780 [2024-11-08 07:35:21.623277] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:03.780 [2024-11-08 07:35:21.716193] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:04.040 ************************************ 00:08:04.040 END TEST dd_unknown_flag 00:08:04.040 ************************************ 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.040 00:08:04.040 real 0m0.485s 00:08:04.040 user 0m0.252s 00:08:04.040 sys 0m0.137s 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.040 ************************************ 00:08:04.040 START TEST dd_invalid_json 00:08:04.040 ************************************ 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.040 07:35:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:04.040 [2024-11-08 07:35:21.894754] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:04.040 [2024-11-08 07:35:21.894850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61737 ] 00:08:04.300 [2024-11-08 07:35:22.040041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.300 [2024-11-08 07:35:22.091256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.300 [2024-11-08 07:35:22.091319] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:04.300 [2024-11-08 07:35:22.091334] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:04.300 [2024-11-08 07:35:22.091343] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.300 [2024-11-08 07:35:22.091375] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.300 ************************************ 00:08:04.300 END TEST dd_invalid_json 00:08:04.300 ************************************ 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.300 00:08:04.300 real 0m0.313s 00:08:04.300 user 0m0.143s 00:08:04.300 sys 0m0.069s 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.300 ************************************ 00:08:04.300 START TEST dd_invalid_seek 00:08:04.300 ************************************ 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:04.300 
07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.300 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:04.560 [2024-11-08 07:35:22.260682] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:04.560 [2024-11-08 07:35:22.260874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61767 ] 00:08:04.560 { 00:08:04.560 "subsystems": [ 00:08:04.560 { 00:08:04.560 "subsystem": "bdev", 00:08:04.560 "config": [ 00:08:04.560 { 00:08:04.560 "params": { 00:08:04.560 "block_size": 512, 00:08:04.560 "num_blocks": 512, 00:08:04.560 "name": "malloc0" 00:08:04.560 }, 00:08:04.560 "method": "bdev_malloc_create" 00:08:04.560 }, 00:08:04.560 { 00:08:04.560 "params": { 00:08:04.560 "block_size": 512, 00:08:04.560 "num_blocks": 512, 00:08:04.560 "name": "malloc1" 00:08:04.560 }, 00:08:04.560 "method": "bdev_malloc_create" 00:08:04.560 }, 00:08:04.560 { 00:08:04.560 "method": "bdev_wait_for_examine" 00:08:04.560 } 00:08:04.560 ] 00:08:04.560 } 00:08:04.560 ] 00:08:04.560 } 00:08:04.560 [2024-11-08 07:35:22.397177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.560 [2024-11-08 07:35:22.442142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.560 [2024-11-08 07:35:22.484184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.820 [2024-11-08 07:35:22.537929] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:04.820 [2024-11-08 07:35:22.537997] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.821 [2024-11-08 07:35:22.632973] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:04.821 ************************************ 00:08:04.821 END TEST dd_invalid_seek 00:08:04.821 ************************************ 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.821 00:08:04.821 real 0m0.480s 00:08:04.821 user 0m0.299s 00:08:04.821 sys 0m0.143s 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.821 ************************************ 00:08:04.821 START TEST dd_invalid_skip 00:08:04.821 ************************************ 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.821 07:35:22 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:05.080 { 00:08:05.080 "subsystems": [ 00:08:05.080 { 00:08:05.080 "subsystem": "bdev", 00:08:05.080 "config": [ 00:08:05.080 { 00:08:05.080 "params": { 00:08:05.080 "block_size": 512, 00:08:05.080 "num_blocks": 512, 00:08:05.080 "name": "malloc0" 00:08:05.080 }, 00:08:05.080 "method": "bdev_malloc_create" 00:08:05.080 }, 00:08:05.080 { 00:08:05.080 "params": { 00:08:05.080 "block_size": 512, 00:08:05.080 "num_blocks": 512, 00:08:05.080 "name": "malloc1" 
00:08:05.080 }, 00:08:05.080 "method": "bdev_malloc_create" 00:08:05.080 }, 00:08:05.080 { 00:08:05.080 "method": "bdev_wait_for_examine" 00:08:05.080 } 00:08:05.080 ] 00:08:05.080 } 00:08:05.080 ] 00:08:05.080 } 00:08:05.080 [2024-11-08 07:35:22.819114] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:08:05.080 [2024-11-08 07:35:22.819817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61795 ] 00:08:05.080 [2024-11-08 07:35:22.969817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.080 [2024-11-08 07:35:23.022182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.339 [2024-11-08 07:35:23.064088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.340 [2024-11-08 07:35:23.118567] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:05.340 [2024-11-08 07:35:23.118796] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.340 [2024-11-08 07:35:23.213791] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.340 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:05.340 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.340 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:05.340 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:05.340 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:05.340 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.340 00:08:05.340 real 0m0.518s 00:08:05.340 user 0m0.325s 00:08:05.340 sys 0m0.151s 00:08:05.340 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:05.340 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:05.340 ************************************ 00:08:05.340 END TEST dd_invalid_skip 00:08:05.340 ************************************ 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.600 ************************************ 00:08:05.600 START TEST dd_invalid_input_count 00:08:05.600 ************************************ 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:05.600 07:35:23 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.600 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:05.600 { 00:08:05.600 "subsystems": [ 00:08:05.600 { 00:08:05.600 "subsystem": "bdev", 00:08:05.600 "config": [ 00:08:05.600 { 00:08:05.600 "params": { 00:08:05.600 "block_size": 512, 00:08:05.600 "num_blocks": 512, 00:08:05.600 "name": "malloc0" 00:08:05.600 }, 00:08:05.600 "method": "bdev_malloc_create" 00:08:05.600 }, 00:08:05.600 { 00:08:05.600 "params": { 00:08:05.600 "block_size": 512, 00:08:05.600 "num_blocks": 512, 00:08:05.600 "name": "malloc1" 00:08:05.600 }, 00:08:05.600 "method": "bdev_malloc_create" 00:08:05.600 }, 00:08:05.600 { 00:08:05.600 "method": "bdev_wait_for_examine" 00:08:05.600 } 
00:08:05.600 ] 00:08:05.600 } 00:08:05.600 ] 00:08:05.600 } 00:08:05.600 [2024-11-08 07:35:23.402392] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:08:05.600 [2024-11-08 07:35:23.403042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:08:05.600 [2024-11-08 07:35:23.554485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.860 [2024-11-08 07:35:23.604847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.860 [2024-11-08 07:35:23.646509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.860 [2024-11-08 07:35:23.699967] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:05.860 [2024-11-08 07:35:23.700257] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.860 [2024-11-08 07:35:23.794401] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.122 00:08:06.122 real 0m0.516s 00:08:06.122 user 0m0.312s 00:08:06.122 sys 0m0.160s 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.122 ************************************ 00:08:06.122 END TEST dd_invalid_input_count 00:08:06.122 ************************************ 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:06.122 ************************************ 00:08:06.122 START TEST dd_invalid_output_count 00:08:06.122 ************************************ 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # invalid_output_count 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.122 07:35:23 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:06.122 { 00:08:06.122 "subsystems": [ 00:08:06.122 { 00:08:06.122 "subsystem": "bdev", 00:08:06.122 "config": [ 00:08:06.122 { 00:08:06.122 "params": { 00:08:06.122 "block_size": 512, 00:08:06.122 "num_blocks": 512, 00:08:06.122 "name": "malloc0" 00:08:06.122 }, 00:08:06.122 "method": "bdev_malloc_create" 00:08:06.122 }, 00:08:06.122 { 00:08:06.122 "method": "bdev_wait_for_examine" 00:08:06.122 } 00:08:06.122 ] 00:08:06.122 } 00:08:06.122 ] 00:08:06.122 } 00:08:06.122 [2024-11-08 07:35:23.984478] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:06.122 [2024-11-08 07:35:23.984579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61866 ] 00:08:06.382 [2024-11-08 07:35:24.134421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.382 [2024-11-08 07:35:24.176217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.382 [2024-11-08 07:35:24.217939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.382 [2024-11-08 07:35:24.265128] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:06.383 [2024-11-08 07:35:24.265183] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.642 [2024-11-08 07:35:24.359726] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.643 00:08:06.643 real 0m0.496s 00:08:06.643 user 0m0.289s 00:08:06.643 sys 0m0.155s 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.643 ************************************ 00:08:06.643 END TEST dd_invalid_output_count 00:08:06.643 ************************************ 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:06.643 ************************************ 00:08:06.643 START TEST dd_bs_not_multiple 00:08:06.643 ************************************ 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:06.643 07:35:24 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.643 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:06.643 { 00:08:06.643 "subsystems": [ 00:08:06.643 { 00:08:06.643 "subsystem": "bdev", 00:08:06.643 "config": [ 00:08:06.643 { 00:08:06.643 "params": { 00:08:06.643 "block_size": 512, 00:08:06.643 "num_blocks": 512, 00:08:06.643 "name": "malloc0" 00:08:06.643 }, 00:08:06.643 "method": "bdev_malloc_create" 00:08:06.643 }, 00:08:06.643 { 00:08:06.643 "params": { 00:08:06.643 "block_size": 512, 00:08:06.643 "num_blocks": 512, 00:08:06.643 "name": "malloc1" 00:08:06.643 }, 00:08:06.643 "method": "bdev_malloc_create" 00:08:06.643 }, 00:08:06.643 { 00:08:06.643 "method": "bdev_wait_for_examine" 00:08:06.643 } 00:08:06.643 ] 00:08:06.643 } 00:08:06.643 ] 00:08:06.643 } 00:08:06.643 [2024-11-08 07:35:24.542955] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:06.643 [2024-11-08 07:35:24.543055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61899 ] 00:08:06.903 [2024-11-08 07:35:24.691657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.903 [2024-11-08 07:35:24.736453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.903 [2024-11-08 07:35:24.778280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.903 [2024-11-08 07:35:24.832184] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:06.903 [2024-11-08 07:35:24.832227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.162 [2024-11-08 07:35:24.927100] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.162 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:07.162 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.162 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:07.162 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:07.162 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:07.162 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.162 00:08:07.162 real 0m0.509s 00:08:07.162 user 0m0.317s 00:08:07.162 sys 0m0.151s 00:08:07.162 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.162 07:35:24 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:07.162 ************************************ 00:08:07.162 END TEST dd_bs_not_multiple 00:08:07.162 ************************************ 00:08:07.162 00:08:07.162 real 0m6.356s 00:08:07.162 user 0m3.172s 00:08:07.162 sys 0m2.644s 00:08:07.162 07:35:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.162 ************************************ 00:08:07.162 END TEST spdk_dd_negative 00:08:07.162 ************************************ 00:08:07.162 07:35:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.162 00:08:07.162 real 1m9.770s 00:08:07.162 user 0m42.137s 00:08:07.162 sys 0m31.627s 00:08:07.162 07:35:25 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.162 ************************************ 00:08:07.162 END TEST spdk_dd 00:08:07.162 ************************************ 00:08:07.162 07:35:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:07.162 07:35:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:07.162 07:35:25 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:07.162 07:35:25 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:07.162 07:35:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.162 07:35:25 -- common/autotest_common.sh@10 -- # set +x 00:08:07.423 07:35:25 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:07.423 07:35:25 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:07.423 07:35:25 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:07.423 07:35:25 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:08:07.423 07:35:25 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:07.423 07:35:25 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:07.423 07:35:25 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:07.423 07:35:25 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:07.423 07:35:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.423 07:35:25 -- common/autotest_common.sh@10 -- # set +x 00:08:07.423 ************************************ 00:08:07.423 START TEST nvmf_tcp 00:08:07.423 ************************************ 00:08:07.423 07:35:25 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:07.423 * Looking for test storage... 00:08:07.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:07.423 07:35:25 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.423 07:35:25 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.423 07:35:25 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.423 07:35:25 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.423 07:35:25 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.683 07:35:25 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.683 07:35:25 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:07.683 07:35:25 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.683 07:35:25 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.683 --rc genhtml_branch_coverage=1 00:08:07.683 --rc genhtml_function_coverage=1 00:08:07.683 --rc genhtml_legend=1 00:08:07.683 --rc geninfo_all_blocks=1 00:08:07.683 --rc geninfo_unexecuted_blocks=1 00:08:07.683 00:08:07.683 ' 00:08:07.683 07:35:25 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.683 --rc genhtml_branch_coverage=1 00:08:07.683 --rc genhtml_function_coverage=1 00:08:07.683 --rc genhtml_legend=1 00:08:07.683 --rc geninfo_all_blocks=1 00:08:07.683 --rc geninfo_unexecuted_blocks=1 00:08:07.683 00:08:07.683 ' 00:08:07.683 07:35:25 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.683 --rc genhtml_branch_coverage=1 00:08:07.683 --rc genhtml_function_coverage=1 00:08:07.683 --rc genhtml_legend=1 00:08:07.683 --rc geninfo_all_blocks=1 00:08:07.683 --rc geninfo_unexecuted_blocks=1 00:08:07.683 00:08:07.683 ' 00:08:07.683 07:35:25 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.683 --rc genhtml_branch_coverage=1 00:08:07.683 --rc genhtml_function_coverage=1 00:08:07.683 --rc genhtml_legend=1 00:08:07.683 --rc geninfo_all_blocks=1 00:08:07.683 --rc geninfo_unexecuted_blocks=1 00:08:07.683 00:08:07.683 ' 00:08:07.683 07:35:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:07.684 07:35:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:07.684 07:35:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:07.684 07:35:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:07.684 07:35:25 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.684 07:35:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.684 ************************************ 00:08:07.684 START TEST nvmf_target_core 00:08:07.684 ************************************ 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:07.684 * Looking for test storage... 00:08:07.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.684 --rc genhtml_branch_coverage=1 00:08:07.684 --rc genhtml_function_coverage=1 00:08:07.684 --rc genhtml_legend=1 00:08:07.684 --rc geninfo_all_blocks=1 00:08:07.684 --rc geninfo_unexecuted_blocks=1 00:08:07.684 00:08:07.684 ' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.684 --rc genhtml_branch_coverage=1 00:08:07.684 --rc genhtml_function_coverage=1 00:08:07.684 --rc genhtml_legend=1 00:08:07.684 --rc geninfo_all_blocks=1 00:08:07.684 --rc geninfo_unexecuted_blocks=1 00:08:07.684 00:08:07.684 ' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.684 --rc genhtml_branch_coverage=1 00:08:07.684 --rc genhtml_function_coverage=1 00:08:07.684 --rc genhtml_legend=1 00:08:07.684 --rc geninfo_all_blocks=1 00:08:07.684 --rc geninfo_unexecuted_blocks=1 00:08:07.684 00:08:07.684 ' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.684 --rc genhtml_branch_coverage=1 00:08:07.684 --rc genhtml_function_coverage=1 00:08:07.684 --rc genhtml_legend=1 00:08:07.684 --rc geninfo_all_blocks=1 00:08:07.684 --rc geninfo_unexecuted_blocks=1 00:08:07.684 00:08:07.684 ' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.684 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.685 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.685 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.945 ************************************ 00:08:07.945 START TEST nvmf_host_management 00:08:07.945 ************************************ 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:07.945 * Looking for test storage... 
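The "[: : integer expression expected" message a few lines up comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the flag being tested is unset, and test(1) cannot compare an empty string numerically. The run is unaffected, but the usual guard is to default the value before the comparison. A minimal sketch (SOME_FLAG is a stand-in name, not the variable common.sh actually checks):

    # SOME_FLAG is illustrative only; it stands in for whatever common.sh line 33 tests.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        :   # flag-specific setup would go here
    fi
    # A [[ ]] test would also avoid the error: [[ '' -eq 1 ]] treats the empty value as 0.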
00:08:07.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.945 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.945 --rc genhtml_branch_coverage=1 00:08:07.945 --rc genhtml_function_coverage=1 00:08:07.945 --rc genhtml_legend=1 00:08:07.945 --rc geninfo_all_blocks=1 00:08:07.945 --rc geninfo_unexecuted_blocks=1 00:08:07.945 00:08:07.945 ' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.946 --rc genhtml_branch_coverage=1 00:08:07.946 --rc genhtml_function_coverage=1 00:08:07.946 --rc genhtml_legend=1 00:08:07.946 --rc geninfo_all_blocks=1 00:08:07.946 --rc geninfo_unexecuted_blocks=1 00:08:07.946 00:08:07.946 ' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.946 --rc genhtml_branch_coverage=1 00:08:07.946 --rc genhtml_function_coverage=1 00:08:07.946 --rc genhtml_legend=1 00:08:07.946 --rc geninfo_all_blocks=1 00:08:07.946 --rc geninfo_unexecuted_blocks=1 00:08:07.946 00:08:07.946 ' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.946 --rc genhtml_branch_coverage=1 00:08:07.946 --rc genhtml_function_coverage=1 00:08:07.946 --rc genhtml_legend=1 00:08:07.946 --rc geninfo_all_blocks=1 00:08:07.946 --rc geninfo_unexecuted_blocks=1 00:08:07.946 00:08:07.946 ' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
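The lcov version probe traced above (lt 1.15 2, which calls cmp_versions 1.15 '<' 2) splits both version strings on . - : and compares them field by field; since 1.x sorts below 2, the extra --rc lcov_branch_coverage / --rc lcov_function_coverage options get exported into LCOV_OPTS. A condensed, generic sketch of that comparison (not the literal scripts/common.sh code, and it assumes purely numeric components):

    version_lt() {
        # Split dotted versions on . - : and compare numerically field by field.
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0   # first smaller component decides
            ((x > y)) && return 1
        done
        return 1                    # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: enabling extra coverage opts"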
00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.946 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.946 07:35:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.946 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.947 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:08.206 Cannot find device "nvmf_init_br" 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:08.206 Cannot find device "nvmf_init_br2" 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:08.206 Cannot find device "nvmf_tgt_br" 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.206 Cannot find device "nvmf_tgt_br2" 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:08.206 Cannot find device "nvmf_init_br" 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:08.206 Cannot find device "nvmf_init_br2" 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:08.206 Cannot find device "nvmf_tgt_br" 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:08.206 07:35:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:08.206 Cannot find device "nvmf_tgt_br2" 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:08.206 Cannot find device "nvmf_br" 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:08.206 Cannot find device "nvmf_init_if" 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:08.206 Cannot find device "nvmf_init_if2" 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:08.206 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:08.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:08.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:08.466 00:08:08.466 --- 10.0.0.3 ping statistics --- 00:08:08.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.466 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:08.466 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:08.466 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:08.466 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:08:08.466 00:08:08.467 --- 10.0.0.4 ping statistics --- 00:08:08.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.467 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:08.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:08.467 00:08:08.467 --- 10.0.0.1 ping statistics --- 00:08:08.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.467 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:08.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:08.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:08.467 00:08:08.467 --- 10.0.0.2 ping statistics --- 00:08:08.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.467 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.467 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.726 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:08.726 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:08.726 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:08.726 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62241 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62241 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62241 ']' 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.727 07:35:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.727 [2024-11-08 07:35:26.495553] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
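nvmf_veth_init, traced above, gives the target its own network namespace and wires it to the initiator through a bridge; the four pings confirm that 10.0.0.1/10.0.0.2 (initiator side, default namespace) and 10.0.0.3/10.0.0.4 (target side, inside nvmf_tgt_ns_spdk) can reach each other. Condensed to one veth pair per side (the *_if2/*_br2 pair is built the same way), the topology amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + bridge end
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow bridged traffic
    ping -c 1 10.0.0.3                                                    # initiator-to-target reachability check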
00:08:08.727 [2024-11-08 07:35:26.495645] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.727 [2024-11-08 07:35:26.656504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.986 [2024-11-08 07:35:26.722897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.986 [2024-11-08 07:35:26.723177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.986 [2024-11-08 07:35:26.723366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.986 [2024-11-08 07:35:26.723507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.986 [2024-11-08 07:35:26.723552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.986 [2024-11-08 07:35:26.724701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.986 [2024-11-08 07:35:26.724788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.986 [2024-11-08 07:35:26.724870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:08.986 [2024-11-08 07:35:26.725014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.986 [2024-11-08 07:35:26.773431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.924 [2024-11-08 07:35:27.579524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
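With the target app up, the test creates the TCP transport (nvmf_create_transport -t tcp -o -u 8192, acknowledged by the "TCP Transport Init" notice above) and then regenerates rpcs.txt and replays it to build the subsystem that appears below (Malloc0, listener on 10.0.0.3:4420). The file's contents are not echoed in this log; an equivalent sequence typed against scripts/rpc.py, using the sizes, serial, NQNs and listener address visible here, would look roughly like this (illustrative, not the script's literal rpcs.txt):

    scripts/rpc.py nvmf_create_transport -t tcp -u 8192                  # -u sets the I/O unit size
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                  # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0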
00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.924 Malloc0 00:08:09.924 [2024-11-08 07:35:27.665599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62295 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62295 /var/tmp/bdevperf.sock 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62295 ']' 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
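waitforlisten 62295 /var/tmp/bdevperf.sock blocks until the freshly forked bdevperf answers on its RPC socket. A minimal stand-in for that wait (illustrative only; the real helper in autotest_common.sh also verifies the pid stays alive) is a poll loop against a cheap RPC:

    wait_for_rpc() {
        local sock=$1 tries=${2:-100}
        while ((tries-- > 0)); do
            # spdk_get_version succeeds as soon as the app's RPC server is listening
            scripts/rpc.py -s "$sock" spdk_get_version &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc /var/tmp/bdevperf.sock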
00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:09.924 { 00:08:09.924 "params": { 00:08:09.924 "name": "Nvme$subsystem", 00:08:09.924 "trtype": "$TEST_TRANSPORT", 00:08:09.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:09.924 "adrfam": "ipv4", 00:08:09.924 "trsvcid": "$NVMF_PORT", 00:08:09.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:09.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:09.924 "hdgst": ${hdgst:-false}, 00:08:09.924 "ddgst": ${ddgst:-false} 00:08:09.924 }, 00:08:09.924 "method": "bdev_nvme_attach_controller" 00:08:09.924 } 00:08:09.924 EOF 00:08:09.924 )") 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:09.924 07:35:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:09.924 "params": { 00:08:09.924 "name": "Nvme0", 00:08:09.924 "trtype": "tcp", 00:08:09.924 "traddr": "10.0.0.3", 00:08:09.924 "adrfam": "ipv4", 00:08:09.924 "trsvcid": "4420", 00:08:09.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.924 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:09.924 "hdgst": false, 00:08:09.924 "ddgst": false 00:08:09.924 }, 00:08:09.924 "method": "bdev_nvme_attach_controller" 00:08:09.924 }' 00:08:09.924 [2024-11-08 07:35:27.789681] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:08:09.924 [2024-11-08 07:35:27.789778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62295 ] 00:08:10.183 [2024-11-08 07:35:27.948239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.183 [2024-11-08 07:35:28.011850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.183 [2024-11-08 07:35:28.062409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.442 Running I/O for 10 seconds... 
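The --json /dev/fd/63 argument a few lines up is a process substitution: gen_nvmf_target_json expands the heredoc shown above once per subsystem index, sanity-checks it with jq, and joins the entries with IFS=, into the config bdevperf attaches from. Recreated by hand with the same parameters (the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config shape and is assumed here, since only the inner attach-controller entry is echoed in the log):

    config='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }'
    jq . <<< "$config" > /dev/null        # validate the JSON, as the harness does
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(printf '%s\n' "$config") \
        -q 64 -o 65536 -w verify -t 10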
00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1347 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1347 -ge 100 ']' 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.015 07:35:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.015 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.015 [2024-11-08 07:35:28.870571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 
07:35:28.870797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.870969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.870990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 07:35:28.871000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.871012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.015 [2024-11-08 
07:35:28.871021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.015 [2024-11-08 07:35:28.871032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 
07:35:28.871231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 
07:35:28.871438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 
07:35:28.871648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.016 [2024-11-08 07:35:28.871680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.016 [2024-11-08 07:35:28.871689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 
07:35:28.871855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.017 [2024-11-08 07:35:28.871962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.871973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e82d0 is same with the state(6) to be set 00:08:11.017 [2024-11-08 07:35:28.872143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.017 [2024-11-08 07:35:28.872166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.872178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.017 [2024-11-08 07:35:28.872187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.872198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.017 [2024-11-08 07:35:28.872208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.872218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:11.017 [2024-11-08 07:35:28.872228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.017 [2024-11-08 07:35:28.872237] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21edce0 is same with the state(6) to be set 00:08:11.017 [2024-11-08 07:35:28.873232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:11.017 task offset: 49152 on job bdev=Nvme0n1 fails 00:08:11.017 00:08:11.017 Latency(us) 00:08:11.017 [2024-11-08T07:35:28.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.017 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:11.017 Job: Nvme0n1 ended in about 0.70 seconds with error 00:08:11.017 Verification LBA range: start 0x0 length 0x400 00:08:11.017 Nvme0n1 : 0.70 2008.65 125.54 91.30 0.00 29889.66 2106.51 28336.52 00:08:11.017 [2024-11-08T07:35:28.978Z] =================================================================================================================== 00:08:11.017 [2024-11-08T07:35:28.978Z] Total : 2008.65 125.54 91.30 0.00 29889.66 2106.51 28336.52 00:08:11.017 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.017 [2024-11-08 07:35:28.875240] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.017 07:35:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:11.017 [2024-11-08 07:35:28.875266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21edce0 (9): Bad file descriptor 00:08:11.017 [2024-11-08 07:35:28.877621] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62295 00:08:11.956 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62295) - No such process 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:11.956 { 00:08:11.956 "params": { 00:08:11.956 "name": "Nvme$subsystem", 00:08:11.956 "trtype": "$TEST_TRANSPORT", 00:08:11.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.956 "adrfam": "ipv4", 00:08:11.956 "trsvcid": "$NVMF_PORT", 00:08:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:11.956 "hdgst": ${hdgst:-false}, 00:08:11.956 "ddgst": ${ddgst:-false} 00:08:11.956 }, 00:08:11.956 "method": 
"bdev_nvme_attach_controller" 00:08:11.956 } 00:08:11.956 EOF 00:08:11.956 )") 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:11.956 07:35:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:11.956 "params": { 00:08:11.956 "name": "Nvme0", 00:08:11.956 "trtype": "tcp", 00:08:11.956 "traddr": "10.0.0.3", 00:08:11.956 "adrfam": "ipv4", 00:08:11.956 "trsvcid": "4420", 00:08:11.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:11.956 "hdgst": false, 00:08:11.956 "ddgst": false 00:08:11.956 }, 00:08:11.956 "method": "bdev_nvme_attach_controller" 00:08:11.956 }' 00:08:12.216 [2024-11-08 07:35:29.941467] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:08:12.216 [2024-11-08 07:35:29.941560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62333 ] 00:08:12.216 [2024-11-08 07:35:30.097025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.216 [2024-11-08 07:35:30.149350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.476 [2024-11-08 07:35:30.199579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.476 Running I/O for 1 seconds... 00:08:13.414 2048.00 IOPS, 128.00 MiB/s 00:08:13.414 Latency(us) 00:08:13.414 [2024-11-08T07:35:31.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.414 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:13.414 Verification LBA range: start 0x0 length 0x400 00:08:13.414 Nvme0n1 : 1.02 2070.22 129.39 0.00 0.00 30418.38 3464.05 28211.69 00:08:13.414 [2024-11-08T07:35:31.375Z] =================================================================================================================== 00:08:13.414 [2024-11-08T07:35:31.375Z] Total : 2070.22 129.39 0.00 0.00 30418.38 3464.05 28211.69 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.673 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:13.673 
07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.674 rmmod nvme_tcp 00:08:13.674 rmmod nvme_fabrics 00:08:13.674 rmmod nvme_keyring 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62241 ']' 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62241 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 62241 ']' 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 62241 00:08:13.674 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62241 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:13.933 killing process with pid 62241 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62241' 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 62241 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 62241 00:08:13.933 [2024-11-08 07:35:31.821851] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:13.933 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:14.193 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.193 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:14.193 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:14.193 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:14.193 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:14.193 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:14.193 07:35:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:14.193 00:08:14.193 real 0m6.457s 00:08:14.193 user 0m23.158s 00:08:14.193 sys 0m1.869s 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.193 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.193 ************************************ 00:08:14.193 END TEST nvmf_host_management 00:08:14.193 ************************************ 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.454 ************************************ 00:08:14.454 START TEST nvmf_lvol 00:08:14.454 ************************************ 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:14.454 * Looking for test 
storage... 00:08:14.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:14.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.454 --rc genhtml_branch_coverage=1 00:08:14.454 --rc genhtml_function_coverage=1 00:08:14.454 --rc genhtml_legend=1 00:08:14.454 --rc geninfo_all_blocks=1 00:08:14.454 --rc geninfo_unexecuted_blocks=1 00:08:14.454 00:08:14.454 ' 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:14.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.454 --rc genhtml_branch_coverage=1 00:08:14.454 --rc genhtml_function_coverage=1 00:08:14.454 --rc genhtml_legend=1 00:08:14.454 --rc geninfo_all_blocks=1 00:08:14.454 --rc geninfo_unexecuted_blocks=1 00:08:14.454 00:08:14.454 ' 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:14.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.454 --rc genhtml_branch_coverage=1 00:08:14.454 --rc genhtml_function_coverage=1 00:08:14.454 --rc genhtml_legend=1 00:08:14.454 --rc geninfo_all_blocks=1 00:08:14.454 --rc geninfo_unexecuted_blocks=1 00:08:14.454 00:08:14.454 ' 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:14.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.454 --rc genhtml_branch_coverage=1 00:08:14.454 --rc genhtml_function_coverage=1 00:08:14.454 --rc genhtml_legend=1 00:08:14.454 --rc geninfo_all_blocks=1 00:08:14.454 --rc geninfo_unexecuted_blocks=1 00:08:14.454 00:08:14.454 ' 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:14.454 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.454 07:35:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.455 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.455 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:14.715 
07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:14.715 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
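The nvmf_veth_init sequence that follows builds a small bridged veth topology between the host-side initiator and a target network namespace. A condensed sketch of the same steps, assuming a clean host with no leftover nvmf_* devices; every command and value mirrors what the log below actually runs:

# Target side lives in its own namespace.
ip netns add nvmf_tgt_ns_spdk
# Initiator interfaces on the host, each peered with a bridge-facing end.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
# Target interfaces, moved into the target namespace.
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1/.2, target 10.0.0.3/.4 (the traddr the tests connect to).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring everything up and join the bridge-facing ends to one bridge.
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br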
00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:14.716 Cannot find device "nvmf_init_br" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:14.716 Cannot find device "nvmf_init_br2" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:14.716 Cannot find device "nvmf_tgt_br" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.716 Cannot find device "nvmf_tgt_br2" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:14.716 Cannot find device "nvmf_init_br" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:14.716 Cannot find device "nvmf_init_br2" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:14.716 Cannot find device "nvmf_tgt_br" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:14.716 Cannot find device "nvmf_tgt_br2" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:14.716 Cannot find device "nvmf_br" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:14.716 Cannot find device "nvmf_init_if" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:14.716 Cannot find device "nvmf_init_if2" 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:14.716 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:14.976 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:14.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:08:14.977 00:08:14.977 --- 10.0.0.3 ping statistics --- 00:08:14.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.977 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:14.977 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:14.977 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:08:14.977 00:08:14.977 --- 10.0.0.4 ping statistics --- 00:08:14.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.977 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:14.977 00:08:14.977 --- 10.0.0.1 ping statistics --- 00:08:14.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.977 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:14.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:14.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:08:14.977 00:08:14.977 --- 10.0.0.2 ping statistics --- 00:08:14.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.977 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:14.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62603 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62603 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 62603 ']' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.977 07:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:14.977 [2024-11-08 07:35:32.931528] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:14.977 [2024-11-08 07:35:32.931786] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.237 [2024-11-08 07:35:33.083651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.237 [2024-11-08 07:35:33.138961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.237 [2024-11-08 07:35:33.139254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.237 [2024-11-08 07:35:33.139400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.237 [2024-11-08 07:35:33.139477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.237 [2024-11-08 07:35:33.139583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.237 [2024-11-08 07:35:33.140736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.237 [2024-11-08 07:35:33.140826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.237 [2024-11-08 07:35:33.140830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.237 [2024-11-08 07:35:33.189030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.496 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:15.496 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:08:15.496 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.496 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.496 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.496 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.496 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:15.754 [2024-11-08 07:35:33.558267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.754 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:16.012 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:16.012 07:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:16.271 07:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:16.271 07:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:16.271 07:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:16.839 07:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7c9c4191-855a-43fa-8061-2fa2c2df36d3 00:08:16.839 07:35:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7c9c4191-855a-43fa-8061-2fa2c2df36d3 lvol 20 00:08:16.839 07:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e1415a06-4415-47e5-b34d-a9d283659ef3 00:08:16.839 07:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:17.097 07:35:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e1415a06-4415-47e5-b34d-a9d283659ef3 00:08:17.356 07:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:17.356 [2024-11-08 07:35:35.295417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:17.356 07:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:17.614 07:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62666 00:08:17.614 07:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:17.614 07:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:18.991 07:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot e1415a06-4415-47e5-b34d-a9d283659ef3 MY_SNAPSHOT 00:08:18.992 07:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1d2ad878-4f1a-489a-b353-36098688d3ef 00:08:18.992 07:35:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize e1415a06-4415-47e5-b34d-a9d283659ef3 30 00:08:19.250 07:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1d2ad878-4f1a-489a-b353-36098688d3ef MY_CLONE 00:08:19.509 07:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2f35e7b9-0509-4230-8f3e-990cbaeb68af 00:08:19.509 07:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2f35e7b9-0509-4230-8f3e-990cbaeb68af 00:08:20.125 07:35:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62666 00:08:28.243 Initializing NVMe Controllers 00:08:28.243 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:28.243 Controller IO queue size 128, less than required. 00:08:28.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:28.243 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:28.243 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:28.243 Initialization complete. Launching workers. 
00:08:28.243 ======================================================== 00:08:28.243 Latency(us) 00:08:28.243 Device Information : IOPS MiB/s Average min max 00:08:28.243 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12327.30 48.15 10385.85 1994.06 47927.28 00:08:28.243 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12647.40 49.40 10125.81 3346.83 49385.84 00:08:28.243 ======================================================== 00:08:28.243 Total : 24974.70 97.56 10254.16 1994.06 49385.84 00:08:28.243 00:08:28.243 07:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.244 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e1415a06-4415-47e5-b34d-a9d283659ef3 00:08:28.502 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7c9c4191-855a-43fa-8061-2fa2c2df36d3 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.761 rmmod nvme_tcp 00:08:28.761 rmmod nvme_fabrics 00:08:28.761 rmmod nvme_keyring 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62603 ']' 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62603 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 62603 ']' 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 62603 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62603 00:08:28.761 killing process with pid 62603 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 62603' 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 62603 00:08:28.761 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 62603 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:29.020 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.279 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:29.279 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:29.279 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:29.279 07:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:29.279 ************************************ 00:08:29.279 END TEST nvmf_lvol 00:08:29.279 ************************************ 00:08:29.279 00:08:29.279 real 0m14.987s 00:08:29.279 user 
0m59.885s 00:08:29.279 sys 0m5.875s 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.279 ************************************ 00:08:29.279 START TEST nvmf_lvs_grow 00:08:29.279 ************************************ 00:08:29.279 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:29.539 * Looking for test storage... 00:08:29.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:29.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.539 --rc genhtml_branch_coverage=1 00:08:29.539 --rc genhtml_function_coverage=1 00:08:29.539 --rc genhtml_legend=1 00:08:29.539 --rc geninfo_all_blocks=1 00:08:29.539 --rc geninfo_unexecuted_blocks=1 00:08:29.539 00:08:29.539 ' 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:29.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.539 --rc genhtml_branch_coverage=1 00:08:29.539 --rc genhtml_function_coverage=1 00:08:29.539 --rc genhtml_legend=1 00:08:29.539 --rc geninfo_all_blocks=1 00:08:29.539 --rc geninfo_unexecuted_blocks=1 00:08:29.539 00:08:29.539 ' 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:29.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.539 --rc genhtml_branch_coverage=1 00:08:29.539 --rc genhtml_function_coverage=1 00:08:29.539 --rc genhtml_legend=1 00:08:29.539 --rc geninfo_all_blocks=1 00:08:29.539 --rc geninfo_unexecuted_blocks=1 00:08:29.539 00:08:29.539 ' 00:08:29.539 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:29.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.539 --rc genhtml_branch_coverage=1 00:08:29.540 --rc genhtml_function_coverage=1 00:08:29.540 --rc genhtml_legend=1 00:08:29.540 --rc geninfo_all_blocks=1 00:08:29.540 --rc geninfo_unexecuted_blocks=1 00:08:29.540 00:08:29.540 ' 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:29.540 07:35:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.540 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
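The nvmftestinit call that follows rebuilds the same veth/namespace topology that the nvmf_lvol run above used: veth pairs for the initiator and target sides, a target-side network namespace, and a bridge joining the host-side peers. A minimal standalone sketch of that topology, with names and addresses taken from the trace (run as root; assumes iproute2 and iptables are available, and shows only the first initiator/target pair):

# Target namespace and the first initiator/target veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.3 (the .2/.4 pair is analogous).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring everything up and join the host-side peers with a bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic and bridged forwarding, then verify connectivity.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is created the same way, and nvmftestfini tears the whole topology down again at the end of the test.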
00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:29.540 Cannot find device "nvmf_init_br" 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:29.540 Cannot find device "nvmf_init_br2" 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:29.540 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:29.540 Cannot find device "nvmf_tgt_br" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.802 Cannot find device "nvmf_tgt_br2" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:29.802 Cannot find device "nvmf_init_br" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:29.802 Cannot find device "nvmf_init_br2" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:29.802 Cannot find device "nvmf_tgt_br" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:29.802 Cannot find device "nvmf_tgt_br2" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:29.802 Cannot find device "nvmf_br" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:29.802 Cannot find device "nvmf_init_if" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:29.802 Cannot find device "nvmf_init_if2" 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.802 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:30.061 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:30.061 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:30.061 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:30.061 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:30.061 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:30.061 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
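The ipts calls just below add the NVMe/TCP firewall rules, and each rule is tagged with an SPDK_NVMF comment; that tag is what lets the iptr cleanup seen at the end of the previous test (iptables-save | grep -v SPDK_NVMF | iptables-restore) remove only the rules the test added. A rough standalone sketch of that tag-and-sweep pattern (the function names mirror the helpers in nvmf/common.sh, but the bodies here are reconstructed from the trace, not copied from the script):

# Insert an iptables rule and tag it so it can be swept out later.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Drop every rule carrying the SPDK_NVMF tag; all other rules are preserved.
iptr() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# ... run the test ...
iptr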
00:08:30.061 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:30.061 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:30.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:30.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:08:30.062 00:08:30.062 --- 10.0.0.3 ping statistics --- 00:08:30.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.062 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:30.062 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:30.062 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:08:30.062 00:08:30.062 --- 10.0.0.4 ping statistics --- 00:08:30.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.062 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:30.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:30.062 00:08:30.062 --- 10.0.0.1 ping statistics --- 00:08:30.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.062 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:30.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:08:30.062 00:08:30.062 --- 10.0.0.2 ping statistics --- 00:08:30.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.062 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63044 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63044 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 63044 ']' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:30.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:30.062 07:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.062 [2024-11-08 07:35:47.957347] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:30.062 [2024-11-08 07:35:47.957664] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.321 [2024-11-08 07:35:48.118506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.321 [2024-11-08 07:35:48.179669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.321 [2024-11-08 07:35:48.179733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.321 [2024-11-08 07:35:48.179749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.321 [2024-11-08 07:35:48.179762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.321 [2024-11-08 07:35:48.179773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.321 [2024-11-08 07:35:48.180163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.321 [2024-11-08 07:35:48.228048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.580 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:30.580 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:30.580 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.580 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.580 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.580 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.580 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:30.840 [2024-11-08 07:35:48.607532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.840 ************************************ 00:08:30.840 START TEST lvs_grow_clean 00:08:30.840 ************************************ 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:30.840 07:35:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.840 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.098 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:31.098 07:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:31.356 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:31.356 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:31.356 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:31.615 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:31.615 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:31.615 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 514c8871-1206-4d14-a18f-4a76ac0446f6 lvol 150 00:08:31.615 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8bf04101-858b-4c76-aded-1a09e3efe725 00:08:31.615 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:31.615 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:31.876 [2024-11-08 07:35:49.810598] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:31.876 [2024-11-08 07:35:49.810657] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:31.876 true 00:08:31.876 07:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:31.876 07:35:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.136 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.136 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.396 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8bf04101-858b-4c76-aded-1a09e3efe725 00:08:32.655 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:32.914 [2024-11-08 07:35:50.751055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:32.914 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63119 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63119 /var/tmp/bdevperf.sock 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 63119 ']' 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.173 07:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:33.173 [2024-11-08 07:35:51.004203] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:33.173 [2024-11-08 07:35:51.004834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63119 ] 00:08:33.432 [2024-11-08 07:35:51.155185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.432 [2024-11-08 07:35:51.219658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.432 [2024-11-08 07:35:51.270784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.372 07:35:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.372 07:35:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:08:34.372 07:35:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:34.372 Nvme0n1 00:08:34.372 07:35:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:34.940 [ 00:08:34.940 { 00:08:34.940 "name": "Nvme0n1", 00:08:34.940 "aliases": [ 00:08:34.940 "8bf04101-858b-4c76-aded-1a09e3efe725" 00:08:34.940 ], 00:08:34.940 "product_name": "NVMe disk", 00:08:34.940 "block_size": 4096, 00:08:34.940 "num_blocks": 38912, 00:08:34.940 "uuid": "8bf04101-858b-4c76-aded-1a09e3efe725", 00:08:34.940 "numa_id": -1, 00:08:34.940 "assigned_rate_limits": { 00:08:34.940 "rw_ios_per_sec": 0, 00:08:34.940 "rw_mbytes_per_sec": 0, 00:08:34.940 "r_mbytes_per_sec": 0, 00:08:34.940 "w_mbytes_per_sec": 0 00:08:34.940 }, 00:08:34.940 "claimed": false, 00:08:34.940 "zoned": false, 00:08:34.940 "supported_io_types": { 00:08:34.940 "read": true, 00:08:34.940 "write": true, 00:08:34.940 "unmap": true, 00:08:34.940 "flush": true, 00:08:34.940 "reset": true, 00:08:34.940 "nvme_admin": true, 00:08:34.940 "nvme_io": true, 00:08:34.940 "nvme_io_md": false, 00:08:34.940 "write_zeroes": true, 00:08:34.940 "zcopy": false, 00:08:34.940 "get_zone_info": false, 00:08:34.940 "zone_management": false, 00:08:34.940 "zone_append": false, 00:08:34.940 "compare": true, 00:08:34.940 "compare_and_write": true, 00:08:34.940 "abort": true, 00:08:34.940 "seek_hole": false, 00:08:34.940 "seek_data": false, 00:08:34.940 "copy": true, 00:08:34.940 "nvme_iov_md": false 00:08:34.940 }, 00:08:34.940 "memory_domains": [ 00:08:34.940 { 00:08:34.940 "dma_device_id": "system", 00:08:34.940 "dma_device_type": 1 00:08:34.940 } 00:08:34.940 ], 00:08:34.940 "driver_specific": { 00:08:34.940 "nvme": [ 00:08:34.940 { 00:08:34.940 "trid": { 00:08:34.940 "trtype": "TCP", 00:08:34.940 "adrfam": "IPv4", 00:08:34.940 "traddr": "10.0.0.3", 00:08:34.940 "trsvcid": "4420", 00:08:34.940 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:34.940 }, 00:08:34.940 "ctrlr_data": { 00:08:34.940 "cntlid": 1, 00:08:34.940 "vendor_id": "0x8086", 00:08:34.940 "model_number": "SPDK bdev Controller", 00:08:34.940 "serial_number": "SPDK0", 00:08:34.940 "firmware_revision": "25.01", 00:08:34.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.940 "oacs": { 00:08:34.940 "security": 0, 00:08:34.940 "format": 0, 00:08:34.940 "firmware": 0, 
00:08:34.940 "ns_manage": 0 00:08:34.940 }, 00:08:34.940 "multi_ctrlr": true, 00:08:34.940 "ana_reporting": false 00:08:34.940 }, 00:08:34.940 "vs": { 00:08:34.940 "nvme_version": "1.3" 00:08:34.940 }, 00:08:34.940 "ns_data": { 00:08:34.940 "id": 1, 00:08:34.940 "can_share": true 00:08:34.940 } 00:08:34.940 } 00:08:34.940 ], 00:08:34.940 "mp_policy": "active_passive" 00:08:34.940 } 00:08:34.940 } 00:08:34.940 ] 00:08:34.940 07:35:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63142 00:08:34.940 07:35:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.940 07:35:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.940 Running I/O for 10 seconds... 00:08:35.878 Latency(us) 00:08:35.878 [2024-11-08T07:35:53.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.878 Nvme0n1 : 1.00 8615.00 33.65 0.00 0.00 0.00 0.00 0.00 00:08:35.878 [2024-11-08T07:35:53.839Z] =================================================================================================================== 00:08:35.878 [2024-11-08T07:35:53.839Z] Total : 8615.00 33.65 0.00 0.00 0.00 0.00 0.00 00:08:35.878 00:08:36.815 07:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:36.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.815 Nvme0n1 : 2.00 8814.00 34.43 0.00 0.00 0.00 0.00 0.00 00:08:36.816 [2024-11-08T07:35:54.777Z] =================================================================================================================== 00:08:36.816 [2024-11-08T07:35:54.777Z] Total : 8814.00 34.43 0.00 0.00 0.00 0.00 0.00 00:08:36.816 00:08:37.075 true 00:08:37.075 07:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:37.075 07:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:37.334 07:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.334 07:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.334 07:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63142 00:08:37.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.902 Nvme0n1 : 3.00 8797.00 34.36 0.00 0.00 0.00 0.00 0.00 00:08:37.902 [2024-11-08T07:35:55.863Z] =================================================================================================================== 00:08:37.902 [2024-11-08T07:35:55.863Z] Total : 8797.00 34.36 0.00 0.00 0.00 0.00 0.00 00:08:37.902 00:08:38.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.840 Nvme0n1 : 4.00 8683.25 33.92 0.00 0.00 0.00 0.00 0.00 00:08:38.840 [2024-11-08T07:35:56.801Z] 
=================================================================================================================== 00:08:38.840 [2024-11-08T07:35:56.801Z] Total : 8683.25 33.92 0.00 0.00 0.00 0.00 0.00 00:08:38.840 00:08:40.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.291 Nvme0n1 : 5.00 8673.80 33.88 0.00 0.00 0.00 0.00 0.00 00:08:40.291 [2024-11-08T07:35:58.252Z] =================================================================================================================== 00:08:40.291 [2024-11-08T07:35:58.252Z] Total : 8673.80 33.88 0.00 0.00 0.00 0.00 0.00 00:08:40.291 00:08:40.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.859 Nvme0n1 : 6.00 8667.50 33.86 0.00 0.00 0.00 0.00 0.00 00:08:40.859 [2024-11-08T07:35:58.820Z] =================================================================================================================== 00:08:40.859 [2024-11-08T07:35:58.820Z] Total : 8667.50 33.86 0.00 0.00 0.00 0.00 0.00 00:08:40.859 00:08:41.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.797 Nvme0n1 : 7.00 8626.71 33.70 0.00 0.00 0.00 0.00 0.00 00:08:41.797 [2024-11-08T07:35:59.758Z] =================================================================================================================== 00:08:41.797 [2024-11-08T07:35:59.758Z] Total : 8626.71 33.70 0.00 0.00 0.00 0.00 0.00 00:08:41.797 00:08:43.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.173 Nvme0n1 : 8.00 8635.88 33.73 0.00 0.00 0.00 0.00 0.00 00:08:43.173 [2024-11-08T07:36:01.134Z] =================================================================================================================== 00:08:43.173 [2024-11-08T07:36:01.134Z] Total : 8635.88 33.73 0.00 0.00 0.00 0.00 0.00 00:08:43.173 00:08:44.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.111 Nvme0n1 : 9.00 8641.00 33.75 0.00 0.00 0.00 0.00 0.00 00:08:44.111 [2024-11-08T07:36:02.072Z] =================================================================================================================== 00:08:44.111 [2024-11-08T07:36:02.072Z] Total : 8641.00 33.75 0.00 0.00 0.00 0.00 0.00 00:08:44.111 00:08:45.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.047 Nvme0n1 : 10.00 8625.40 33.69 0.00 0.00 0.00 0.00 0.00 00:08:45.047 [2024-11-08T07:36:03.008Z] =================================================================================================================== 00:08:45.047 [2024-11-08T07:36:03.008Z] Total : 8625.40 33.69 0.00 0.00 0.00 0.00 0.00 00:08:45.047 00:08:45.047 00:08:45.047 Latency(us) 00:08:45.047 [2024-11-08T07:36:03.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.047 Nvme0n1 : 10.00 8634.98 33.73 0.00 0.00 14819.51 4213.03 94871.16 00:08:45.047 [2024-11-08T07:36:03.008Z] =================================================================================================================== 00:08:45.047 [2024-11-08T07:36:03.008Z] Total : 8634.98 33.73 0.00 0.00 14819.51 4213.03 94871.16 00:08:45.047 { 00:08:45.047 "results": [ 00:08:45.047 { 00:08:45.047 "job": "Nvme0n1", 00:08:45.047 "core_mask": "0x2", 00:08:45.047 "workload": "randwrite", 00:08:45.047 "status": "finished", 00:08:45.047 "queue_depth": 128, 00:08:45.047 "io_size": 4096, 00:08:45.047 "runtime": 
10.003732, 00:08:45.047 "iops": 8634.977426424459, 00:08:45.047 "mibps": 33.73038057197054, 00:08:45.047 "io_failed": 0, 00:08:45.047 "io_timeout": 0, 00:08:45.047 "avg_latency_us": 14819.508104400056, 00:08:45.047 "min_latency_us": 4213.028571428571, 00:08:45.047 "max_latency_us": 94871.1619047619 00:08:45.047 } 00:08:45.047 ], 00:08:45.047 "core_count": 1 00:08:45.047 } 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63119 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 63119 ']' 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 63119 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63119 00:08:45.047 killing process with pid 63119 00:08:45.047 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.047 00:08:45.047 Latency(us) 00:08:45.047 [2024-11-08T07:36:03.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.047 [2024-11-08T07:36:03.008Z] =================================================================================================================== 00:08:45.047 [2024-11-08T07:36:03.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63119' 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 63119 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 63119 00:08:45.047 07:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:45.306 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.565 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:45.565 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.823 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:45.823 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:45.823 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.082 [2024-11-08 07:36:03.900653] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:46.082 07:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:46.379 request: 00:08:46.379 { 00:08:46.379 "uuid": "514c8871-1206-4d14-a18f-4a76ac0446f6", 00:08:46.379 "method": "bdev_lvol_get_lvstores", 00:08:46.379 "req_id": 1 00:08:46.379 } 00:08:46.379 Got JSON-RPC error response 00:08:46.379 response: 00:08:46.379 { 00:08:46.379 "code": -19, 00:08:46.379 "message": "No such device" 00:08:46.379 } 00:08:46.379 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:46.379 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.379 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:46.379 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.379 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.638 aio_bdev 00:08:46.638 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
8bf04101-858b-4c76-aded-1a09e3efe725 00:08:46.638 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=8bf04101-858b-4c76-aded-1a09e3efe725 00:08:46.638 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:46.638 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:08:46.638 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:46.638 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:46.638 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:46.897 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8bf04101-858b-4c76-aded-1a09e3efe725 -t 2000 00:08:46.897 [ 00:08:46.897 { 00:08:46.897 "name": "8bf04101-858b-4c76-aded-1a09e3efe725", 00:08:46.897 "aliases": [ 00:08:46.897 "lvs/lvol" 00:08:46.897 ], 00:08:46.897 "product_name": "Logical Volume", 00:08:46.897 "block_size": 4096, 00:08:46.897 "num_blocks": 38912, 00:08:46.897 "uuid": "8bf04101-858b-4c76-aded-1a09e3efe725", 00:08:46.897 "assigned_rate_limits": { 00:08:46.897 "rw_ios_per_sec": 0, 00:08:46.897 "rw_mbytes_per_sec": 0, 00:08:46.897 "r_mbytes_per_sec": 0, 00:08:46.897 "w_mbytes_per_sec": 0 00:08:46.897 }, 00:08:46.897 "claimed": false, 00:08:46.897 "zoned": false, 00:08:46.897 "supported_io_types": { 00:08:46.897 "read": true, 00:08:46.897 "write": true, 00:08:46.897 "unmap": true, 00:08:46.897 "flush": false, 00:08:46.897 "reset": true, 00:08:46.897 "nvme_admin": false, 00:08:46.897 "nvme_io": false, 00:08:46.897 "nvme_io_md": false, 00:08:46.897 "write_zeroes": true, 00:08:46.897 "zcopy": false, 00:08:46.897 "get_zone_info": false, 00:08:46.897 "zone_management": false, 00:08:46.897 "zone_append": false, 00:08:46.897 "compare": false, 00:08:46.897 "compare_and_write": false, 00:08:46.897 "abort": false, 00:08:46.897 "seek_hole": true, 00:08:46.897 "seek_data": true, 00:08:46.897 "copy": false, 00:08:46.897 "nvme_iov_md": false 00:08:46.897 }, 00:08:46.897 "driver_specific": { 00:08:46.897 "lvol": { 00:08:46.897 "lvol_store_uuid": "514c8871-1206-4d14-a18f-4a76ac0446f6", 00:08:46.897 "base_bdev": "aio_bdev", 00:08:46.897 "thin_provision": false, 00:08:46.897 "num_allocated_clusters": 38, 00:08:46.897 "snapshot": false, 00:08:46.897 "clone": false, 00:08:46.897 "esnap_clone": false 00:08:46.897 } 00:08:46.897 } 00:08:46.897 } 00:08:46.897 ] 00:08:46.897 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:08:46.897 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:46.897 07:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:47.156 07:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.156 07:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:47.156 07:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.415 07:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.415 07:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8bf04101-858b-4c76-aded-1a09e3efe725 00:08:47.674 07:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 514c8871-1206-4d14-a18f-4a76ac0446f6 00:08:47.933 07:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.192 07:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.760 ************************************ 00:08:48.760 END TEST lvs_grow_clean 00:08:48.760 ************************************ 00:08:48.760 00:08:48.760 real 0m17.776s 00:08:48.760 user 0m16.106s 00:08:48.760 sys 0m3.252s 00:08:48.760 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:48.760 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:48.760 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:48.760 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.761 ************************************ 00:08:48.761 START TEST lvs_grow_dirty 00:08:48.761 ************************************ 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.761 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.020 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:49.020 07:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:49.279 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=64807b73-0a6b-4186-8169-db0caead428b 00:08:49.279 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:49.279 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:08:49.538 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:49.538 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:49.538 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 64807b73-0a6b-4186-8169-db0caead428b lvol 150 00:08:49.538 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=26a1bb84-21a9-4462-b1ff-0faa058aed4c 00:08:49.538 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.538 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:49.796 [2024-11-08 07:36:07.659630] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:49.796 [2024-11-08 07:36:07.659725] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:49.796 true 00:08:49.796 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:49.796 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:08:50.055 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:50.055 07:36:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:50.313 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 26a1bb84-21a9-4462-b1ff-0faa058aed4c 00:08:50.572 07:36:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:50.830 [2024-11-08 07:36:08.760130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:50.830 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63385 00:08:51.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63385 /var/tmp/bdevperf.sock 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63385 ']' 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:51.089 07:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.089 [2024-11-08 07:36:09.002905] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:08:51.089 [2024-11-08 07:36:09.003202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:08:51.348 [2024-11-08 07:36:09.153475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.348 [2024-11-08 07:36:09.216672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.348 [2024-11-08 07:36:09.264598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.282 07:36:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.282 07:36:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:52.282 07:36:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:52.282 Nvme0n1 00:08:52.282 07:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:52.542 [ 00:08:52.542 { 00:08:52.542 "name": "Nvme0n1", 00:08:52.542 "aliases": [ 00:08:52.542 "26a1bb84-21a9-4462-b1ff-0faa058aed4c" 00:08:52.542 ], 00:08:52.542 "product_name": "NVMe disk", 00:08:52.542 "block_size": 4096, 00:08:52.542 "num_blocks": 38912, 00:08:52.542 "uuid": "26a1bb84-21a9-4462-b1ff-0faa058aed4c", 00:08:52.542 "numa_id": -1, 00:08:52.542 "assigned_rate_limits": { 00:08:52.542 "rw_ios_per_sec": 0, 00:08:52.542 "rw_mbytes_per_sec": 0, 00:08:52.542 "r_mbytes_per_sec": 0, 00:08:52.542 "w_mbytes_per_sec": 0 00:08:52.542 }, 00:08:52.542 "claimed": false, 00:08:52.542 "zoned": false, 00:08:52.542 "supported_io_types": { 00:08:52.542 "read": true, 00:08:52.542 "write": true, 00:08:52.542 "unmap": true, 00:08:52.542 "flush": true, 00:08:52.542 "reset": true, 00:08:52.542 "nvme_admin": true, 00:08:52.542 "nvme_io": true, 00:08:52.542 "nvme_io_md": false, 00:08:52.542 "write_zeroes": true, 00:08:52.542 "zcopy": false, 00:08:52.542 "get_zone_info": false, 00:08:52.542 "zone_management": false, 00:08:52.542 "zone_append": false, 00:08:52.542 "compare": true, 00:08:52.542 "compare_and_write": true, 00:08:52.542 "abort": true, 00:08:52.542 "seek_hole": false, 00:08:52.542 "seek_data": false, 00:08:52.542 "copy": true, 00:08:52.542 "nvme_iov_md": false 00:08:52.542 }, 00:08:52.542 "memory_domains": [ 00:08:52.542 { 00:08:52.542 "dma_device_id": "system", 00:08:52.542 "dma_device_type": 1 00:08:52.542 } 00:08:52.542 ], 00:08:52.542 "driver_specific": { 00:08:52.542 "nvme": [ 00:08:52.542 { 00:08:52.542 "trid": { 00:08:52.542 "trtype": "TCP", 00:08:52.542 "adrfam": "IPv4", 00:08:52.542 "traddr": "10.0.0.3", 00:08:52.542 "trsvcid": "4420", 00:08:52.542 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:52.542 }, 00:08:52.542 "ctrlr_data": { 00:08:52.542 "cntlid": 1, 00:08:52.542 "vendor_id": "0x8086", 00:08:52.542 "model_number": "SPDK bdev Controller", 00:08:52.542 "serial_number": "SPDK0", 00:08:52.542 "firmware_revision": "25.01", 00:08:52.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.542 "oacs": { 00:08:52.542 "security": 0, 00:08:52.542 "format": 0, 00:08:52.542 "firmware": 0, 
00:08:52.542 "ns_manage": 0 00:08:52.542 }, 00:08:52.542 "multi_ctrlr": true, 00:08:52.542 "ana_reporting": false 00:08:52.542 }, 00:08:52.542 "vs": { 00:08:52.542 "nvme_version": "1.3" 00:08:52.542 }, 00:08:52.542 "ns_data": { 00:08:52.542 "id": 1, 00:08:52.542 "can_share": true 00:08:52.542 } 00:08:52.542 } 00:08:52.542 ], 00:08:52.542 "mp_policy": "active_passive" 00:08:52.542 } 00:08:52.542 } 00:08:52.542 ] 00:08:52.542 07:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.542 07:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63409 00:08:52.542 07:36:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:52.803 Running I/O for 10 seconds... 00:08:53.741 Latency(us) 00:08:53.741 [2024-11-08T07:36:11.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.741 Nvme0n1 : 1.00 9525.00 37.21 0.00 0.00 0.00 0.00 0.00 00:08:53.741 [2024-11-08T07:36:11.702Z] =================================================================================================================== 00:08:53.741 [2024-11-08T07:36:11.702Z] Total : 9525.00 37.21 0.00 0.00 0.00 0.00 0.00 00:08:53.741 00:08:54.678 07:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 64807b73-0a6b-4186-8169-db0caead428b 00:08:54.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.678 Nvme0n1 : 2.00 9425.00 36.82 0.00 0.00 0.00 0.00 0.00 00:08:54.678 [2024-11-08T07:36:12.639Z] =================================================================================================================== 00:08:54.678 [2024-11-08T07:36:12.639Z] Total : 9425.00 36.82 0.00 0.00 0.00 0.00 0.00 00:08:54.678 00:08:54.678 true 00:08:54.938 07:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:08:54.938 07:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:55.196 07:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:55.196 07:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:55.196 07:36:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63409 00:08:55.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.765 Nvme0n1 : 3.00 9289.00 36.29 0.00 0.00 0.00 0.00 0.00 00:08:55.765 [2024-11-08T07:36:13.726Z] =================================================================================================================== 00:08:55.765 [2024-11-08T07:36:13.726Z] Total : 9289.00 36.29 0.00 0.00 0.00 0.00 0.00 00:08:55.765 00:08:56.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.708 Nvme0n1 : 4.00 9221.00 36.02 0.00 0.00 0.00 0.00 0.00 00:08:56.708 [2024-11-08T07:36:14.669Z] 
=================================================================================================================== 00:08:56.708 [2024-11-08T07:36:14.669Z] Total : 9221.00 36.02 0.00 0.00 0.00 0.00 0.00 00:08:56.708 00:08:57.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.646 Nvme0n1 : 5.00 9154.80 35.76 0.00 0.00 0.00 0.00 0.00 00:08:57.646 [2024-11-08T07:36:15.607Z] =================================================================================================================== 00:08:57.646 [2024-11-08T07:36:15.607Z] Total : 9154.80 35.76 0.00 0.00 0.00 0.00 0.00 00:08:57.646 00:08:58.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.584 Nvme0n1 : 6.00 9089.50 35.51 0.00 0.00 0.00 0.00 0.00 00:08:58.584 [2024-11-08T07:36:16.545Z] =================================================================================================================== 00:08:58.585 [2024-11-08T07:36:16.546Z] Total : 9089.50 35.51 0.00 0.00 0.00 0.00 0.00 00:08:58.585 00:08:59.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.966 Nvme0n1 : 7.00 8712.71 34.03 0.00 0.00 0.00 0.00 0.00 00:08:59.966 [2024-11-08T07:36:17.927Z] =================================================================================================================== 00:08:59.966 [2024-11-08T07:36:17.927Z] Total : 8712.71 34.03 0.00 0.00 0.00 0.00 0.00 00:08:59.966 00:09:00.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.907 Nvme0n1 : 8.00 8719.00 34.06 0.00 0.00 0.00 0.00 0.00 00:09:00.907 [2024-11-08T07:36:18.868Z] =================================================================================================================== 00:09:00.907 [2024-11-08T07:36:18.868Z] Total : 8719.00 34.06 0.00 0.00 0.00 0.00 0.00 00:09:00.907 00:09:01.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.843 Nvme0n1 : 9.00 8737.56 34.13 0.00 0.00 0.00 0.00 0.00 00:09:01.843 [2024-11-08T07:36:19.804Z] =================================================================================================================== 00:09:01.843 [2024-11-08T07:36:19.804Z] Total : 8737.56 34.13 0.00 0.00 0.00 0.00 0.00 00:09:01.843 00:09:02.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.779 Nvme0n1 : 10.00 8740.10 34.14 0.00 0.00 0.00 0.00 0.00 00:09:02.779 [2024-11-08T07:36:20.740Z] =================================================================================================================== 00:09:02.779 [2024-11-08T07:36:20.741Z] Total : 8740.10 34.14 0.00 0.00 0.00 0.00 0.00 00:09:02.780 00:09:02.780 00:09:02.780 Latency(us) 00:09:02.780 [2024-11-08T07:36:20.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.780 Nvme0n1 : 10.01 8747.34 34.17 0.00 0.00 14628.91 8675.72 301590.43 00:09:02.780 [2024-11-08T07:36:20.741Z] =================================================================================================================== 00:09:02.780 [2024-11-08T07:36:20.741Z] Total : 8747.34 34.17 0.00 0.00 14628.91 8675.72 301590.43 00:09:02.780 { 00:09:02.780 "results": [ 00:09:02.780 { 00:09:02.780 "job": "Nvme0n1", 00:09:02.780 "core_mask": "0x2", 00:09:02.780 "workload": "randwrite", 00:09:02.780 "status": "finished", 00:09:02.780 "queue_depth": 128, 00:09:02.780 "io_size": 4096, 00:09:02.780 "runtime": 
10.006353, 00:09:02.780 "iops": 8747.34281311083, 00:09:02.780 "mibps": 34.16930786371418, 00:09:02.780 "io_failed": 0, 00:09:02.780 "io_timeout": 0, 00:09:02.780 "avg_latency_us": 14628.905596001106, 00:09:02.780 "min_latency_us": 8675.718095238095, 00:09:02.780 "max_latency_us": 301590.43047619046 00:09:02.780 } 00:09:02.780 ], 00:09:02.780 "core_count": 1 00:09:02.780 } 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63385 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 63385 ']' 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 63385 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63385 00:09:02.780 killing process with pid 63385 00:09:02.780 Received shutdown signal, test time was about 10.000000 seconds 00:09:02.780 00:09:02.780 Latency(us) 00:09:02.780 [2024-11-08T07:36:20.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.780 [2024-11-08T07:36:20.741Z] =================================================================================================================== 00:09:02.780 [2024-11-08T07:36:20.741Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63385' 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 63385 00:09:02.780 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 63385 00:09:03.039 07:36:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:03.297 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.298 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:03.298 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63044 
00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63044 00:09:03.557 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63044 Killed "${NVMF_APP[@]}" "$@" 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63541 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63541 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63541 ']' 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.557 07:36:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.815 [2024-11-08 07:36:21.553798] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:09:03.815 [2024-11-08 07:36:21.554076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.815 [2024-11-08 07:36:21.697226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.815 [2024-11-08 07:36:21.747929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.815 [2024-11-08 07:36:21.747991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.815 [2024-11-08 07:36:21.748001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.815 [2024-11-08 07:36:21.748010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.815 [2024-11-08 07:36:21.748017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:03.815 [2024-11-08 07:36:21.748299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.073 [2024-11-08 07:36:21.789478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.640 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.640 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:04.640 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.640 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:04.640 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.640 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.640 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.899 [2024-11-08 07:36:22.632283] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:04.899 [2024-11-08 07:36:22.633069] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:04.899 [2024-11-08 07:36:22.633213] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:04.899 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:04.899 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 26a1bb84-21a9-4462-b1ff-0faa058aed4c 00:09:04.899 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=26a1bb84-21a9-4462-b1ff-0faa058aed4c 00:09:04.899 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:04.899 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:04.899 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:04.899 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:04.899 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:05.158 07:36:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26a1bb84-21a9-4462-b1ff-0faa058aed4c -t 2000 00:09:05.416 [ 00:09:05.416 { 00:09:05.416 "name": "26a1bb84-21a9-4462-b1ff-0faa058aed4c", 00:09:05.416 "aliases": [ 00:09:05.416 "lvs/lvol" 00:09:05.416 ], 00:09:05.416 "product_name": "Logical Volume", 00:09:05.416 "block_size": 4096, 00:09:05.416 "num_blocks": 38912, 00:09:05.416 "uuid": "26a1bb84-21a9-4462-b1ff-0faa058aed4c", 00:09:05.416 "assigned_rate_limits": { 00:09:05.416 "rw_ios_per_sec": 0, 00:09:05.416 "rw_mbytes_per_sec": 0, 00:09:05.416 "r_mbytes_per_sec": 0, 00:09:05.416 "w_mbytes_per_sec": 0 00:09:05.416 }, 00:09:05.416 
"claimed": false, 00:09:05.416 "zoned": false, 00:09:05.416 "supported_io_types": { 00:09:05.416 "read": true, 00:09:05.416 "write": true, 00:09:05.416 "unmap": true, 00:09:05.416 "flush": false, 00:09:05.416 "reset": true, 00:09:05.416 "nvme_admin": false, 00:09:05.416 "nvme_io": false, 00:09:05.416 "nvme_io_md": false, 00:09:05.416 "write_zeroes": true, 00:09:05.416 "zcopy": false, 00:09:05.416 "get_zone_info": false, 00:09:05.416 "zone_management": false, 00:09:05.416 "zone_append": false, 00:09:05.416 "compare": false, 00:09:05.416 "compare_and_write": false, 00:09:05.416 "abort": false, 00:09:05.416 "seek_hole": true, 00:09:05.416 "seek_data": true, 00:09:05.416 "copy": false, 00:09:05.416 "nvme_iov_md": false 00:09:05.416 }, 00:09:05.416 "driver_specific": { 00:09:05.416 "lvol": { 00:09:05.416 "lvol_store_uuid": "64807b73-0a6b-4186-8169-db0caead428b", 00:09:05.416 "base_bdev": "aio_bdev", 00:09:05.416 "thin_provision": false, 00:09:05.416 "num_allocated_clusters": 38, 00:09:05.416 "snapshot": false, 00:09:05.416 "clone": false, 00:09:05.416 "esnap_clone": false 00:09:05.416 } 00:09:05.416 } 00:09:05.416 } 00:09:05.416 ] 00:09:05.416 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:05.416 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:05.416 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:05.675 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:05.675 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:05.675 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:05.675 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:05.675 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.935 [2024-11-08 07:36:23.794037] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.935 07:36:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:05.935 07:36:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:06.193 request: 00:09:06.193 { 00:09:06.193 "uuid": "64807b73-0a6b-4186-8169-db0caead428b", 00:09:06.193 "method": "bdev_lvol_get_lvstores", 00:09:06.193 "req_id": 1 00:09:06.193 } 00:09:06.193 Got JSON-RPC error response 00:09:06.193 response: 00:09:06.193 { 00:09:06.193 "code": -19, 00:09:06.193 "message": "No such device" 00:09:06.193 } 00:09:06.193 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:06.193 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.193 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:06.193 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.193 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.451 aio_bdev 00:09:06.451 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 26a1bb84-21a9-4462-b1ff-0faa058aed4c 00:09:06.452 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=26a1bb84-21a9-4462-b1ff-0faa058aed4c 00:09:06.452 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:06.452 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:06.452 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:06.452 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:06.452 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.710 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26a1bb84-21a9-4462-b1ff-0faa058aed4c -t 2000 00:09:06.969 [ 00:09:06.969 { 
00:09:06.969 "name": "26a1bb84-21a9-4462-b1ff-0faa058aed4c", 00:09:06.969 "aliases": [ 00:09:06.969 "lvs/lvol" 00:09:06.969 ], 00:09:06.969 "product_name": "Logical Volume", 00:09:06.969 "block_size": 4096, 00:09:06.969 "num_blocks": 38912, 00:09:06.969 "uuid": "26a1bb84-21a9-4462-b1ff-0faa058aed4c", 00:09:06.969 "assigned_rate_limits": { 00:09:06.969 "rw_ios_per_sec": 0, 00:09:06.969 "rw_mbytes_per_sec": 0, 00:09:06.969 "r_mbytes_per_sec": 0, 00:09:06.969 "w_mbytes_per_sec": 0 00:09:06.969 }, 00:09:06.969 "claimed": false, 00:09:06.969 "zoned": false, 00:09:06.969 "supported_io_types": { 00:09:06.969 "read": true, 00:09:06.969 "write": true, 00:09:06.969 "unmap": true, 00:09:06.969 "flush": false, 00:09:06.969 "reset": true, 00:09:06.969 "nvme_admin": false, 00:09:06.969 "nvme_io": false, 00:09:06.969 "nvme_io_md": false, 00:09:06.970 "write_zeroes": true, 00:09:06.970 "zcopy": false, 00:09:06.970 "get_zone_info": false, 00:09:06.970 "zone_management": false, 00:09:06.970 "zone_append": false, 00:09:06.970 "compare": false, 00:09:06.970 "compare_and_write": false, 00:09:06.970 "abort": false, 00:09:06.970 "seek_hole": true, 00:09:06.970 "seek_data": true, 00:09:06.970 "copy": false, 00:09:06.970 "nvme_iov_md": false 00:09:06.970 }, 00:09:06.970 "driver_specific": { 00:09:06.970 "lvol": { 00:09:06.970 "lvol_store_uuid": "64807b73-0a6b-4186-8169-db0caead428b", 00:09:06.970 "base_bdev": "aio_bdev", 00:09:06.970 "thin_provision": false, 00:09:06.970 "num_allocated_clusters": 38, 00:09:06.970 "snapshot": false, 00:09:06.970 "clone": false, 00:09:06.970 "esnap_clone": false 00:09:06.970 } 00:09:06.970 } 00:09:06.970 } 00:09:06.970 ] 00:09:06.970 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:06.970 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:06.970 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:07.228 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:07.228 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:07.228 07:36:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:07.487 07:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:07.487 07:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 26a1bb84-21a9-4462-b1ff-0faa058aed4c 00:09:07.746 07:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64807b73-0a6b-4186-8169-db0caead428b 00:09:07.746 07:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.006 07:36:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:08.572 ************************************ 00:09:08.572 END TEST lvs_grow_dirty 00:09:08.572 ************************************ 00:09:08.572 00:09:08.572 real 0m19.889s 00:09:08.572 user 0m39.028s 00:09:08.572 sys 0m8.500s 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:08.572 nvmf_trace.0 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.572 07:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.138 rmmod nvme_tcp 00:09:09.138 rmmod nvme_fabrics 00:09:09.138 rmmod nvme_keyring 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63541 ']' 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63541 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 63541 ']' 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 63541 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:09:09.138 07:36:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:09.138 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63541 00:09:09.395 killing process with pid 63541 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63541' 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 63541 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 63541 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.395 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:09.396 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:09.396 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:09.396 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:09.396 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.396 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:09.396 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:09.396 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:09.396 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:09.654 ************************************ 00:09:09.654 END TEST nvmf_lvs_grow 00:09:09.654 ************************************ 00:09:09.654 00:09:09.654 real 0m40.294s 00:09:09.654 user 1m1.497s 00:09:09.654 sys 0m12.950s 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.654 ************************************ 00:09:09.654 START TEST nvmf_bdev_io_wait 00:09:09.654 ************************************ 00:09:09.654 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.913 * Looking for test storage... 
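Note: the nvmftestfini teardown traced above (just before the nvmf_lvs_grow summary) reduces to the sequence below. This is a condensed sketch, not the harness code itself: the interface, bridge and namespace names are the ones in the trace, $nvmfpid stands for the target PID (63541 in this run), and the final `ip netns delete` is an assumption about what remove_spdk_ns does, since its body is not shown here.

  sync
  modprobe -v -r nvme-tcp                                   # also pulls out nvme_fabrics / nvme_keyring, as in the trace
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                        # stop the nvmf_tgt reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only the SPDK-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
      ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  ip netns delete nvmf_tgt_ns_spdk || true                  # assumed body of remove_spdk_ns (not shown in the trace)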
00:09:09.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:09.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.913 --rc genhtml_branch_coverage=1 00:09:09.913 --rc genhtml_function_coverage=1 00:09:09.913 --rc genhtml_legend=1 00:09:09.913 --rc geninfo_all_blocks=1 00:09:09.913 --rc geninfo_unexecuted_blocks=1 00:09:09.913 00:09:09.913 ' 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:09.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.913 --rc genhtml_branch_coverage=1 00:09:09.913 --rc genhtml_function_coverage=1 00:09:09.913 --rc genhtml_legend=1 00:09:09.913 --rc geninfo_all_blocks=1 00:09:09.913 --rc geninfo_unexecuted_blocks=1 00:09:09.913 00:09:09.913 ' 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:09.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.913 --rc genhtml_branch_coverage=1 00:09:09.913 --rc genhtml_function_coverage=1 00:09:09.913 --rc genhtml_legend=1 00:09:09.913 --rc geninfo_all_blocks=1 00:09:09.913 --rc geninfo_unexecuted_blocks=1 00:09:09.913 00:09:09.913 ' 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:09.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.913 --rc genhtml_branch_coverage=1 00:09:09.913 --rc genhtml_function_coverage=1 00:09:09.913 --rc genhtml_legend=1 00:09:09.913 --rc geninfo_all_blocks=1 00:09:09.913 --rc geninfo_unexecuted_blocks=1 00:09:09.913 00:09:09.913 ' 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.913 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
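Note: the `lt 1.15 2` / cmp_versions trace a little further up is the harness checking whether the installed lcov is older than 2.0 before choosing coverage flags. A simplified sketch of that check is below; the function name, the lcov/awk invocation and the resulting lcov_rc_opt value come from the trace, but the body is reduced and is not the exact scripts/common.sh implementation.

  lt() {                        # succeeds when dotted version $1 is strictly older than $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1                  # equal versions are not "less than"
  }
  lcov_version=$(lcov --version | awk '{print $NF}')
  lt "$lcov_version" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'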
00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.914 
07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.914 Cannot find device "nvmf_init_br" 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.914 Cannot find device "nvmf_init_br2" 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.914 Cannot find device "nvmf_tgt_br" 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.914 Cannot find device "nvmf_tgt_br2" 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.914 Cannot find device "nvmf_init_br" 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:09.914 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:10.172 Cannot find device "nvmf_init_br2" 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:10.173 Cannot find device "nvmf_tgt_br" 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:10.173 Cannot find device "nvmf_tgt_br2" 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:10.173 Cannot find device "nvmf_br" 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:10.173 Cannot find device "nvmf_init_if" 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:10.173 Cannot find device "nvmf_init_if2" 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:10.173 
07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:10.173 07:36:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:10.173 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:10.431 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:10.431 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:10.431 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:10.431 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:10.431 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:10.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:10.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:10.432 00:09:10.432 --- 10.0.0.3 ping statistics --- 00:09:10.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.432 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:10.432 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:10.432 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:09:10.432 00:09:10.432 --- 10.0.0.4 ping statistics --- 00:09:10.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.432 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:10.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:10.432 00:09:10.432 --- 10.0.0.1 ping statistics --- 00:09:10.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.432 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:10.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:09:10.432 00:09:10.432 --- 10.0.0.2 ping statistics --- 00:09:10.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.432 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63914 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63914 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 63914 ']' 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:10.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:10.432 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.432 [2024-11-08 07:36:28.281563] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
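Note: nvmf_veth_init, traced above, builds the virtual topology that nvmf_tgt has just been started on top of (the target runs inside the nvmf_tgt_ns_spdk namespace and listens on 10.0.0.3). Collected into one runnable sketch; names, addresses and iptables rules are the ones in the trace, while error handling and the pre-cleanup pass are omitted.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target side
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge ties both sides together
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions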
00:09:10.432 [2024-11-08 07:36:28.281664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.691 [2024-11-08 07:36:28.431934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.691 [2024-11-08 07:36:28.476297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.691 [2024-11-08 07:36:28.476349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.691 [2024-11-08 07:36:28.476359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.691 [2024-11-08 07:36:28.476367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.691 [2024-11-08 07:36:28.476375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.691 [2024-11-08 07:36:28.477337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.691 [2024-11-08 07:36:28.477515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.691 [2024-11-08 07:36:28.477598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.691 [2024-11-08 07:36:28.477602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.691 [2024-11-08 07:36:28.631501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.691 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.691 [2024-11-08 07:36:28.646684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 Malloc0 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.950 [2024-11-08 07:36:28.706106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63936 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63938 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.950 07:36:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.950 { 00:09:10.950 "params": { 00:09:10.950 "name": "Nvme$subsystem", 00:09:10.950 "trtype": "$TEST_TRANSPORT", 00:09:10.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.950 "adrfam": "ipv4", 00:09:10.950 "trsvcid": "$NVMF_PORT", 00:09:10.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.950 "hdgst": ${hdgst:-false}, 00:09:10.950 "ddgst": ${ddgst:-false} 00:09:10.950 }, 00:09:10.950 "method": "bdev_nvme_attach_controller" 00:09:10.950 } 00:09:10.950 EOF 00:09:10.950 )") 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63940 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63943 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.950 { 00:09:10.950 "params": { 00:09:10.950 "name": "Nvme$subsystem", 00:09:10.950 "trtype": "$TEST_TRANSPORT", 00:09:10.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.950 "adrfam": "ipv4", 00:09:10.950 "trsvcid": "$NVMF_PORT", 00:09:10.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.950 "hdgst": ${hdgst:-false}, 00:09:10.950 "ddgst": ${ddgst:-false} 00:09:10.950 }, 00:09:10.950 "method": "bdev_nvme_attach_controller" 00:09:10.950 } 00:09:10.950 EOF 00:09:10.950 )") 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:09:10.950 { 00:09:10.950 "params": { 00:09:10.950 "name": "Nvme$subsystem", 00:09:10.950 "trtype": "$TEST_TRANSPORT", 00:09:10.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.950 "adrfam": "ipv4", 00:09:10.950 "trsvcid": "$NVMF_PORT", 00:09:10.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.950 "hdgst": ${hdgst:-false}, 00:09:10.950 "ddgst": ${ddgst:-false} 00:09:10.950 }, 00:09:10.950 "method": "bdev_nvme_attach_controller" 00:09:10.950 } 00:09:10.950 EOF 00:09:10.950 )") 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:10.950 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:10.950 { 00:09:10.950 "params": { 00:09:10.950 "name": "Nvme$subsystem", 00:09:10.950 "trtype": "$TEST_TRANSPORT", 00:09:10.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.950 "adrfam": "ipv4", 00:09:10.950 "trsvcid": "$NVMF_PORT", 00:09:10.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.950 "hdgst": ${hdgst:-false}, 00:09:10.950 "ddgst": ${ddgst:-false} 00:09:10.950 }, 00:09:10.950 "method": "bdev_nvme_attach_controller" 00:09:10.951 } 00:09:10.951 EOF 00:09:10.951 )") 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.951 "params": { 00:09:10.951 "name": "Nvme1", 00:09:10.951 "trtype": "tcp", 00:09:10.951 "traddr": "10.0.0.3", 00:09:10.951 "adrfam": "ipv4", 00:09:10.951 "trsvcid": "4420", 00:09:10.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.951 "hdgst": false, 00:09:10.951 "ddgst": false 00:09:10.951 }, 00:09:10.951 "method": "bdev_nvme_attach_controller" 00:09:10.951 }' 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.951 "params": { 00:09:10.951 "name": "Nvme1", 00:09:10.951 "trtype": "tcp", 00:09:10.951 "traddr": "10.0.0.3", 00:09:10.951 "adrfam": "ipv4", 00:09:10.951 "trsvcid": "4420", 00:09:10.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.951 "hdgst": false, 00:09:10.951 "ddgst": false 00:09:10.951 }, 00:09:10.951 "method": "bdev_nvme_attach_controller" 00:09:10.951 }' 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
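Note: the target side of this test is configured entirely over JSON-RPC once nvmf_tgt is up with --wait-for-rpc. The rpc_cmd calls traced above, collected into one sequence; the rpc.py path and every argument are taken from the trace, and the $rpc shorthand is only for readability.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_set_options -p 5 -c 1         # deliberately tiny bdev_io pool/cache so I/O has to wait for buffers
  $rpc framework_start_init               # finish the init that --wait-for-rpc deferred
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420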
00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.951 "params": { 00:09:10.951 "name": "Nvme1", 00:09:10.951 "trtype": "tcp", 00:09:10.951 "traddr": "10.0.0.3", 00:09:10.951 "adrfam": "ipv4", 00:09:10.951 "trsvcid": "4420", 00:09:10.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.951 "hdgst": false, 00:09:10.951 "ddgst": false 00:09:10.951 }, 00:09:10.951 "method": "bdev_nvme_attach_controller" 00:09:10.951 }' 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:10.951 "params": { 00:09:10.951 "name": "Nvme1", 00:09:10.951 "trtype": "tcp", 00:09:10.951 "traddr": "10.0.0.3", 00:09:10.951 "adrfam": "ipv4", 00:09:10.951 "trsvcid": "4420", 00:09:10.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.951 "hdgst": false, 00:09:10.951 "ddgst": false 00:09:10.951 }, 00:09:10.951 "method": "bdev_nvme_attach_controller" 00:09:10.951 }' 00:09:10.951 [2024-11-08 07:36:28.776686] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:09:10.951 [2024-11-08 07:36:28.776776] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:10.951 [2024-11-08 07:36:28.779022] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:09:10.951 [2024-11-08 07:36:28.779100] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:10.951 07:36:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63936 00:09:10.951 [2024-11-08 07:36:28.788442] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:09:10.951 [2024-11-08 07:36:28.788554] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:10.951 [2024-11-08 07:36:28.807186] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:09:10.951 [2024-11-08 07:36:28.807674] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:11.210 [2024-11-08 07:36:28.992438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.210 [2024-11-08 07:36:29.044904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:11.210 [2024-11-08 07:36:29.053968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.210 [2024-11-08 07:36:29.059259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.210 [2024-11-08 07:36:29.106874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:11.210 [2024-11-08 07:36:29.112214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.210 [2024-11-08 07:36:29.120968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.210 [2024-11-08 07:36:29.164408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:11.468 [2024-11-08 07:36:29.178417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.468 [2024-11-08 07:36:29.181372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.468 Running I/O for 1 seconds... 00:09:11.468 Running I/O for 1 seconds... 00:09:11.468 [2024-11-08 07:36:29.234234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:11.468 [2024-11-08 07:36:29.248501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.468 Running I/O for 1 seconds... 00:09:11.468 Running I/O for 1 seconds... 
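For reference, the interleaved trace above amounts to four bdevperf instances driven against the same TCP target, one per I/O type, each reading a generated controller config from /dev/fd/63. Roughly, target/bdev_io_wait.sh does the following (a condensed sketch, not the verbatim script; the core mask and shm id of the write instance are inferred from the 0x10/spdk1 EAL parameters above):

    BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # -m core mask, -i shared-memory id, -q queue depth, -o I/O size in bytes,
    # -w workload, -t runtime in seconds, -s memory size in MB
    "$BPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    "$BPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    "$BPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    "$BPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    # the script then waits on each instance's PID (63936/63938/63940/63943 in this run)

The process substitution is what appears as --json /dev/fd/63 in the commands above, and gen_nvmf_target_json (nvmf/common.sh) builds the controller entry printed repeatedly in the trace:

    { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" }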
00:09:12.402 207272.00 IOPS, 809.66 MiB/s 00:09:12.402 Latency(us) 00:09:12.402 [2024-11-08T07:36:30.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.402 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:12.402 Nvme1n1 : 1.00 206942.48 808.37 0.00 0.00 615.78 300.37 1575.98 00:09:12.402 [2024-11-08T07:36:30.363Z] =================================================================================================================== 00:09:12.402 [2024-11-08T07:36:30.363Z] Total : 206942.48 808.37 0.00 0.00 615.78 300.37 1575.98 00:09:12.402 11135.00 IOPS, 43.50 MiB/s 00:09:12.402 Latency(us) 00:09:12.402 [2024-11-08T07:36:30.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.402 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:12.402 Nvme1n1 : 1.01 11174.67 43.65 0.00 0.00 11406.07 6990.51 17850.76 00:09:12.402 [2024-11-08T07:36:30.363Z] =================================================================================================================== 00:09:12.402 [2024-11-08T07:36:30.363Z] Total : 11174.67 43.65 0.00 0.00 11406.07 6990.51 17850.76 00:09:12.402 8925.00 IOPS, 34.86 MiB/s 00:09:12.402 Latency(us) 00:09:12.402 [2024-11-08T07:36:30.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.402 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:12.402 Nvme1n1 : 1.01 8988.13 35.11 0.00 0.00 14178.56 5523.75 22968.81 00:09:12.402 [2024-11-08T07:36:30.363Z] =================================================================================================================== 00:09:12.402 [2024-11-08T07:36:30.363Z] Total : 8988.13 35.11 0.00 0.00 14178.56 5523.75 22968.81 00:09:12.661 10336.00 IOPS, 40.38 MiB/s 00:09:12.661 Latency(us) 00:09:12.661 [2024-11-08T07:36:30.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.661 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:12.661 Nvme1n1 : 1.01 10451.49 40.83 0.00 0.00 12214.80 4181.82 20721.86 00:09:12.661 [2024-11-08T07:36:30.622Z] =================================================================================================================== 00:09:12.661 [2024-11-08T07:36:30.622Z] Total : 10451.49 40.83 0.00 0.00 12214.80 4181.82 20721.86 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63938 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63940 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63943 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.661 rmmod nvme_tcp 00:09:12.661 rmmod nvme_fabrics 00:09:12.661 rmmod nvme_keyring 00:09:12.661 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63914 ']' 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63914 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 63914 ']' 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 63914 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63914 00:09:12.920 killing process with pid 63914 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63914' 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 63914 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 63914 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:12.920 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.178 07:36:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:13.178 00:09:13.178 real 0m3.479s 00:09:13.178 user 0m13.631s 00:09:13.178 sys 0m2.337s 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.178 ************************************ 00:09:13.178 END TEST nvmf_bdev_io_wait 00:09:13.178 ************************************ 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.178 ************************************ 00:09:13.178 START TEST nvmf_queue_depth 00:09:13.178 ************************************ 00:09:13.178 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.501 * Looking for test storage... 
00:09:13.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:13.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.501 --rc genhtml_branch_coverage=1 00:09:13.501 --rc genhtml_function_coverage=1 00:09:13.501 --rc genhtml_legend=1 00:09:13.501 --rc geninfo_all_blocks=1 00:09:13.501 --rc geninfo_unexecuted_blocks=1 00:09:13.501 00:09:13.501 ' 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:13.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.501 --rc genhtml_branch_coverage=1 00:09:13.501 --rc genhtml_function_coverage=1 00:09:13.501 --rc genhtml_legend=1 00:09:13.501 --rc geninfo_all_blocks=1 00:09:13.501 --rc geninfo_unexecuted_blocks=1 00:09:13.501 00:09:13.501 ' 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:13.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.501 --rc genhtml_branch_coverage=1 00:09:13.501 --rc genhtml_function_coverage=1 00:09:13.501 --rc genhtml_legend=1 00:09:13.501 --rc geninfo_all_blocks=1 00:09:13.501 --rc geninfo_unexecuted_blocks=1 00:09:13.501 00:09:13.501 ' 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:13.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.501 --rc genhtml_branch_coverage=1 00:09:13.501 --rc genhtml_function_coverage=1 00:09:13.501 --rc genhtml_legend=1 00:09:13.501 --rc geninfo_all_blocks=1 00:09:13.501 --rc geninfo_unexecuted_blocks=1 00:09:13.501 00:09:13.501 ' 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.501 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.502 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:13.502 
07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.502 07:36:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:13.502 Cannot find device "nvmf_init_br" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:13.502 Cannot find device "nvmf_init_br2" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:13.502 Cannot find device "nvmf_tgt_br" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.502 Cannot find device "nvmf_tgt_br2" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:13.502 Cannot find device "nvmf_init_br" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:13.502 Cannot find device "nvmf_init_br2" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:13.502 Cannot find device "nvmf_tgt_br" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:13.502 Cannot find device "nvmf_tgt_br2" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:13.502 Cannot find device "nvmf_br" 00:09:13.502 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:13.761 Cannot find device "nvmf_init_if" 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:13.761 Cannot find device "nvmf_init_if2" 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.761 07:36:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.761 
07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:13.761 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:14.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:09:14.020 00:09:14.020 --- 10.0.0.3 ping statistics --- 00:09:14.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.020 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:14.020 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:14.020 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:09:14.020 00:09:14.020 --- 10.0.0.4 ping statistics --- 00:09:14.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.020 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:14.020 00:09:14.020 --- 10.0.0.1 ping statistics --- 00:09:14.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.020 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:14.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:09:14.020 00:09:14.020 --- 10.0.0.2 ping statistics --- 00:09:14.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.020 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64203 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64203 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64203 ']' 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:14.020 07:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.020 [2024-11-08 07:36:31.832237] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
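Before nvmf_tgt is launched (the "Starting SPDK" line above), nvmf_veth_init has rebuilt the test network; the "Cannot find device" messages are only the idempotent teardown of whatever a previous run left behind. Condensed, the topology it creates looks like this (commands as traced above, link-up steps and the second address pair elided):

    ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF for later cleanup

The second initiator/target pair (10.0.0.2 and 10.0.0.4 on nvmf_init_if2/nvmf_tgt_if2) is wired up the same way, and the four pings above confirm both pairs are reachable across the bridge.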
00:09:14.021 [2024-11-08 07:36:31.832345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.280 [2024-11-08 07:36:31.999086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.280 [2024-11-08 07:36:32.058788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.280 [2024-11-08 07:36:32.058841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.280 [2024-11-08 07:36:32.058857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.280 [2024-11-08 07:36:32.058870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.280 [2024-11-08 07:36:32.058881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.280 [2024-11-08 07:36:32.059264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.280 [2024-11-08 07:36:32.106890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.216 [2024-11-08 07:36:32.875036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.216 Malloc0 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.216 [2024-11-08 07:36:32.925783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64240 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64240 /var/tmp/bdevperf.sock 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64240 ']' 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:15.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:15.216 07:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.216 [2024-11-08 07:36:32.974889] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:09:15.216 [2024-11-08 07:36:32.974966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64240 ] 00:09:15.216 [2024-11-08 07:36:33.126425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.475 [2024-11-08 07:36:33.189929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.475 [2024-11-08 07:36:33.237843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.475 07:36:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:15.475 07:36:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:15.475 07:36:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:15.475 07:36:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.475 07:36:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.475 NVMe0n1 00:09:15.475 07:36:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.475 07:36:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.733 Running I/O for 10 seconds... 00:09:17.606 9044.00 IOPS, 35.33 MiB/s [2024-11-08T07:36:36.945Z] 9521.00 IOPS, 37.19 MiB/s [2024-11-08T07:36:37.512Z] 9913.67 IOPS, 38.73 MiB/s [2024-11-08T07:36:38.910Z] 10081.25 IOPS, 39.38 MiB/s [2024-11-08T07:36:39.846Z] 10258.00 IOPS, 40.07 MiB/s [2024-11-08T07:36:40.782Z] 10378.50 IOPS, 40.54 MiB/s [2024-11-08T07:36:41.721Z] 10434.57 IOPS, 40.76 MiB/s [2024-11-08T07:36:42.659Z] 10502.88 IOPS, 41.03 MiB/s [2024-11-08T07:36:43.594Z] 10529.56 IOPS, 41.13 MiB/s [2024-11-08T07:36:43.594Z] 10561.20 IOPS, 41.25 MiB/s 00:09:25.633 Latency(us) 00:09:25.633 [2024-11-08T07:36:43.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.633 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:25.633 Verification LBA range: start 0x0 length 0x4000 00:09:25.633 NVMe0n1 : 10.06 10599.54 41.40 0.00 0.00 96295.07 16477.62 71403.03 00:09:25.633 [2024-11-08T07:36:43.595Z] =================================================================================================================== 00:09:25.634 [2024-11-08T07:36:43.595Z] Total : 10599.54 41.40 0.00 0.00 96295.07 16477.62 71403.03 00:09:25.634 { 00:09:25.634 "results": [ 00:09:25.634 { 00:09:25.634 "job": "NVMe0n1", 00:09:25.634 "core_mask": "0x1", 00:09:25.634 "workload": "verify", 00:09:25.634 "status": "finished", 00:09:25.634 "verify_range": { 00:09:25.634 "start": 0, 00:09:25.634 "length": 16384 00:09:25.634 }, 00:09:25.634 "queue_depth": 1024, 00:09:25.634 "io_size": 4096, 00:09:25.634 "runtime": 10.059869, 00:09:25.634 "iops": 10599.541604368804, 00:09:25.634 "mibps": 41.40445939206564, 00:09:25.634 "io_failed": 0, 00:09:25.634 "io_timeout": 0, 00:09:25.634 "avg_latency_us": 96295.07105810479, 00:09:25.634 "min_latency_us": 16477.62285714286, 00:09:25.634 "max_latency_us": 71403.03238095238 
00:09:25.634 } 00:09:25.634 ], 00:09:25.634 "core_count": 1 00:09:25.634 } 00:09:25.634 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64240 00:09:25.634 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64240 ']' 00:09:25.634 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64240 00:09:25.634 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:25.634 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64240 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64240' 00:09:25.892 killing process with pid 64240 00:09:25.892 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.892 00:09:25.892 Latency(us) 00:09:25.892 [2024-11-08T07:36:43.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.892 [2024-11-08T07:36:43.853Z] =================================================================================================================== 00:09:25.892 [2024-11-08T07:36:43.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64240 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64240 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.892 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.151 rmmod nvme_tcp 00:09:26.151 rmmod nvme_fabrics 00:09:26.151 rmmod nvme_keyring 00:09:26.151 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.151 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:26.151 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:26.151 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64203 ']' 00:09:26.151 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64203 00:09:26.151 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64203 ']' 
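To summarize the run being torn down here: target/queue_depth.sh exports a single 64 MiB malloc-backed namespace over TCP and drives it with a verify workload at a queue depth of 1024, checking that no I/O fails or times out. Condensed from the trace above (rpc_cmd is the autotest wrapper around the SPDK RPC client):

    # target side, inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt -m 0x2)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # initiator side: bdevperf waits for RPCs (-z), then the controller is attached and the run started
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ~10.6k IOPS, zero-failure result above is the pass condition; killing the bdevperf (64240) and nvmf_tgt (64203) processes and unwinding the veth topology completes the test.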
00:09:26.151 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64203 00:09:26.151 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:26.152 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.152 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64203 00:09:26.152 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:26.152 killing process with pid 64203 00:09:26.152 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:26.152 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64203' 00:09:26.152 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64203 00:09:26.152 07:36:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64203 00:09:26.152 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.152 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.152 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.152 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:26.152 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.152 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:26.152 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:26.411 07:36:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:26.411 00:09:26.411 real 0m13.248s 00:09:26.411 user 0m21.794s 00:09:26.411 sys 0m2.728s 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.411 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.411 ************************************ 00:09:26.411 END TEST nvmf_queue_depth 00:09:26.411 ************************************ 00:09:26.670 07:36:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:26.670 07:36:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:26.670 07:36:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.670 07:36:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.670 ************************************ 00:09:26.670 START TEST nvmf_target_multipath 00:09:26.670 ************************************ 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:26.671 * Looking for test storage... 
00:09:26.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.671 --rc genhtml_branch_coverage=1 00:09:26.671 --rc genhtml_function_coverage=1 00:09:26.671 --rc genhtml_legend=1 00:09:26.671 --rc geninfo_all_blocks=1 00:09:26.671 --rc geninfo_unexecuted_blocks=1 00:09:26.671 00:09:26.671 ' 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.671 --rc genhtml_branch_coverage=1 00:09:26.671 --rc genhtml_function_coverage=1 00:09:26.671 --rc genhtml_legend=1 00:09:26.671 --rc geninfo_all_blocks=1 00:09:26.671 --rc geninfo_unexecuted_blocks=1 00:09:26.671 00:09:26.671 ' 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.671 --rc genhtml_branch_coverage=1 00:09:26.671 --rc genhtml_function_coverage=1 00:09:26.671 --rc genhtml_legend=1 00:09:26.671 --rc geninfo_all_blocks=1 00:09:26.671 --rc geninfo_unexecuted_blocks=1 00:09:26.671 00:09:26.671 ' 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:26.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.671 --rc genhtml_branch_coverage=1 00:09:26.671 --rc genhtml_function_coverage=1 00:09:26.671 --rc genhtml_legend=1 00:09:26.671 --rc geninfo_all_blocks=1 00:09:26.671 --rc geninfo_unexecuted_blocks=1 00:09:26.671 00:09:26.671 ' 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.671 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.953 
07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.953 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.954 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:26.954 07:36:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:26.954 Cannot find device "nvmf_init_br" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:26.954 Cannot find device "nvmf_init_br2" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:26.954 Cannot find device "nvmf_tgt_br" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.954 Cannot find device "nvmf_tgt_br2" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:26.954 Cannot find device "nvmf_init_br" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:26.954 Cannot find device "nvmf_init_br2" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:26.954 Cannot find device "nvmf_tgt_br" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:26.954 Cannot find device "nvmf_tgt_br2" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:26.954 Cannot find device "nvmf_br" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:26.954 Cannot find device "nvmf_init_if" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:26.954 Cannot find device "nvmf_init_if2" 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:26.954 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
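The nvmf_veth_init steps above build the test topology with iproute2: a network namespace for the target plus veth pairs bridged on the host. The following is a rough Python sketch of a cut-down version of that plumbing; the namespace, interface names, and addresses are taken from the commands in the log, but reducing it to a single initiator/target pair and driving it from Python are assumptions for illustration only (requires root and iproute2).

#!/usr/bin/env python3
"""Sketch of a reduced veth/namespace topology like the one set up above.

Interface names, the namespace name, and the 10.0.0.x addresses mirror the
log; everything else (single pair only, Python wrapper) is an assumption.
"""
import subprocess

def ip(*args: str) -> None:
    # Thin wrapper so each call mirrors one "ip ..." line from the log.
    subprocess.run(["ip", *args], check=True)

def minimal_topology() -> None:
    ip("netns", "add", "nvmf_tgt_ns_spdk")
    # One veth pair for the initiator side, one for the target side.
    ip("link", "add", "nvmf_init_if", "type", "veth", "peer", "name", "nvmf_init_br")
    ip("link", "add", "nvmf_tgt_if", "type", "veth", "peer", "name", "nvmf_tgt_br")
    # Move the target end into the namespace and address both sides.
    ip("link", "set", "nvmf_tgt_if", "netns", "nvmf_tgt_ns_spdk")
    ip("addr", "add", "10.0.0.1/24", "dev", "nvmf_init_if")
    ip("netns", "exec", "nvmf_tgt_ns_spdk",
       "ip", "addr", "add", "10.0.0.3/24", "dev", "nvmf_tgt_if")
    # Bring the links up.
    for dev in ("nvmf_init_if", "nvmf_init_br", "nvmf_tgt_br"):
        ip("link", "set", dev, "up")
    ip("netns", "exec", "nvmf_tgt_ns_spdk", "ip", "link", "set", "nvmf_tgt_if", "up")
    # Bridge the host-side ends together, as nvmf_br is used above.
    ip("link", "add", "nvmf_br", "type", "bridge")
    ip("link", "set", "nvmf_br", "up")
    for dev in ("nvmf_init_br", "nvmf_tgt_br"):
        ip("link", "set", dev, "master", "nvmf_br")

if __name__ == "__main__":
    minimal_topology()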
00:09:27.213 07:36:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:27.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:09:27.213 00:09:27.213 --- 10.0.0.3 ping statistics --- 00:09:27.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.213 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:27.213 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:27.213 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:09:27.213 00:09:27.213 --- 10.0.0.4 ping statistics --- 00:09:27.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.213 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:27.213 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:27.214 00:09:27.214 --- 10.0.0.1 ping statistics --- 00:09:27.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.214 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:27.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:27.214 00:09:27.214 --- 10.0.0.2 ping statistics --- 00:09:27.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.214 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64604 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64604 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 64604 ']' 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:27.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
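nvmfappstart launches nvmf_tgt inside the namespace and then waits for it to listen on /var/tmp/spdk.sock. A small sketch of that wait step follows; the socket path is the one printed above, while the retry count and 1-second interval are assumptions rather than the values used by autotest_common.sh.

#!/usr/bin/env python3
"""Sketch of the 'waitforlisten' step: poll the SPDK RPC unix socket.

/var/tmp/spdk.sock is the path shown in the log; retries and sleep interval
are illustrative assumptions.
"""
import socket
import sys
import time

def wait_for_rpc(path: str = "/var/tmp/spdk.sock", retries: int = 100) -> bool:
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)      # succeeds once nvmf_tgt is listening
            return True
        except OSError:
            time.sleep(1)        # target still starting up
        finally:
            s.close()
    return False

if __name__ == "__main__":
    sys.exit(0 if wait_for_rpc() else 1)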
00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:27.214 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.471 [2024-11-08 07:36:45.209090] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:09:27.472 [2024-11-08 07:36:45.209182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.472 [2024-11-08 07:36:45.369373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.472 [2024-11-08 07:36:45.426892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.472 [2024-11-08 07:36:45.426959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.472 [2024-11-08 07:36:45.426975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.472 [2024-11-08 07:36:45.427000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.472 [2024-11-08 07:36:45.427011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.472 [2024-11-08 07:36:45.428118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.472 [2024-11-08 07:36:45.428164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.472 [2024-11-08 07:36:45.428250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.729 [2024-11-08 07:36:45.428258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.729 [2024-11-08 07:36:45.476185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.729 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:27.729 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:09:27.729 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.729 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:27.729 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.729 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.729 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.987 [2024-11-08 07:36:45.863912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.987 07:36:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:28.244 Malloc0 00:09:28.244 07:36:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:28.502 07:36:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.759 07:36:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:29.016 [2024-11-08 07:36:46.875595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:29.016 07:36:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:29.273 [2024-11-08 07:36:47.071781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:29.273 07:36:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:29.273 07:36:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:29.530 07:36:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.530 07:36:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:09:29.530 07:36:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.530 07:36:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:29.530 07:36:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:09:31.430 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:31.430 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:31.430 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.430 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:31.430 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.430 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:09:31.430 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64692 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:31.688 07:36:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:31.688 [global] 00:09:31.688 thread=1 00:09:31.688 invalidate=1 00:09:31.688 rw=randrw 00:09:31.688 time_based=1 00:09:31.688 runtime=6 00:09:31.688 ioengine=libaio 00:09:31.688 direct=1 00:09:31.688 bs=4096 00:09:31.688 iodepth=128 00:09:31.688 norandommap=0 00:09:31.688 numjobs=1 00:09:31.688 00:09:31.688 verify_dump=1 00:09:31.688 verify_backlog=512 00:09:31.688 verify_state_save=0 00:09:31.688 do_verify=1 00:09:31.688 verify=crc32c-intel 00:09:31.688 [job0] 00:09:31.688 filename=/dev/nvme0n1 00:09:31.688 Could not set queue depth (nvme0n1) 00:09:31.688 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:31.688 fio-3.35 00:09:31.688 Starting 1 thread 00:09:32.620 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:32.878 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:33.143 07:36:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:33.401 07:36:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64692 00:09:38.738 00:09:38.738 job0: (groupid=0, jobs=1): err= 0: pid=64713: Fri Nov 8 07:36:55 2024 00:09:38.738 read: IOPS=13.1k, BW=51.3MiB/s (53.8MB/s)(308MiB/6004msec) 00:09:38.738 slat (usec): min=4, max=5414, avg=44.83, stdev=174.06 00:09:38.738 clat (usec): min=1100, max=13813, avg=6695.32, stdev=1173.48 00:09:38.738 lat (usec): min=1116, max=13827, avg=6740.15, stdev=1177.16 00:09:38.738 clat percentiles (usec): 00:09:38.738 | 1.00th=[ 3589], 5.00th=[ 5145], 10.00th=[ 5800], 20.00th=[ 6128], 00:09:38.738 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:09:38.738 | 70.00th=[ 6849], 80.00th=[ 7111], 90.00th=[ 7635], 95.00th=[ 9634], 00:09:38.738 | 99.00th=[10552], 99.50th=[10814], 99.90th=[11207], 99.95th=[11338], 00:09:38.738 | 99.99th=[11600] 00:09:38.738 bw ( KiB/s): min=16080, max=32496, per=51.04%, avg=26834.73, stdev=5880.23, samples=11 00:09:38.738 iops : min= 4020, max= 8124, avg=6708.64, stdev=1470.06, samples=11 00:09:38.738 write: IOPS=7461, BW=29.1MiB/s (30.6MB/s)(157MiB/5373msec); 0 zone resets 00:09:38.738 slat (usec): min=15, max=1413, avg=52.01, stdev=114.60 00:09:38.738 clat (usec): min=1066, max=11743, avg=5808.05, stdev=1029.24 00:09:38.738 lat (usec): min=1094, max=11767, avg=5860.06, stdev=1032.87 00:09:38.738 clat percentiles (usec): 00:09:38.738 | 1.00th=[ 2966], 5.00th=[ 3458], 10.00th=[ 4228], 20.00th=[ 5407], 00:09:38.738 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6128], 00:09:38.738 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6652], 95.00th=[ 6849], 00:09:38.738 | 99.00th=[ 9110], 99.50th=[ 9503], 99.90th=[10552], 99.95th=[10945], 00:09:38.738 | 99.99th=[11469] 00:09:38.738 bw ( KiB/s): min=17032, max=32120, per=89.68%, avg=26764.82, stdev=5388.08, samples=11 00:09:38.738 iops : min= 4258, max= 8030, avg=6691.18, stdev=1347.01, samples=11 00:09:38.738 lat (msec) : 2=0.05%, 4=4.20%, 10=93.80%, 20=1.95% 00:09:38.738 cpu : usr=5.56%, sys=23.68%, ctx=6877, majf=0, minf=54 00:09:38.738 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:38.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.738 issued rwts: total=78910,40088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.738 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.738 00:09:38.738 Run status group 0 (all jobs): 00:09:38.738 READ: bw=51.3MiB/s (53.8MB/s), 51.3MiB/s-51.3MiB/s (53.8MB/s-53.8MB/s), io=308MiB (323MB), run=6004-6004msec 00:09:38.738 WRITE: bw=29.1MiB/s (30.6MB/s), 29.1MiB/s-29.1MiB/s (30.6MB/s-30.6MB/s), io=157MiB (164MB), run=5373-5373msec 00:09:38.738 00:09:38.738 Disk stats (read/write): 00:09:38.738 nvme0n1: ios=77726/39376, merge=0/0, ticks=497980/213916, in_queue=711896, util=98.63% 00:09:38.738 07:36:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64788 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:38.738 07:36:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:38.738 [global] 00:09:38.738 thread=1 00:09:38.738 invalidate=1 00:09:38.738 rw=randrw 00:09:38.738 time_based=1 00:09:38.738 runtime=6 00:09:38.738 ioengine=libaio 00:09:38.738 direct=1 00:09:38.738 bs=4096 00:09:38.738 iodepth=128 00:09:38.738 norandommap=0 00:09:38.738 numjobs=1 00:09:38.738 00:09:38.738 verify_dump=1 00:09:38.739 verify_backlog=512 00:09:38.739 verify_state_save=0 00:09:38.739 do_verify=1 00:09:38.739 verify=crc32c-intel 00:09:38.739 [job0] 00:09:38.739 filename=/dev/nvme0n1 00:09:38.739 Could not set queue depth (nvme0n1) 00:09:38.739 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:38.739 fio-3.35 00:09:38.739 Starting 1 thread 00:09:39.702 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:39.977 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:40.235 
07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:40.235 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:40.235 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:40.236 07:36:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:40.236 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:40.494 07:36:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64788 00:09:45.763 00:09:45.763 job0: (groupid=0, jobs=1): err= 0: pid=64809: Fri Nov 8 07:37:02 2024 00:09:45.763 read: IOPS=14.2k, BW=55.3MiB/s (58.0MB/s)(332MiB/6002msec) 00:09:45.763 slat (usec): min=5, max=5118, avg=36.70, stdev=145.44 00:09:45.763 clat (usec): min=772, max=11900, avg=6266.98, stdev=1325.99 00:09:45.763 lat (usec): min=783, max=11931, avg=6303.68, stdev=1334.14 00:09:45.763 clat percentiles (usec): 00:09:45.763 | 1.00th=[ 2540], 5.00th=[ 3752], 10.00th=[ 4490], 20.00th=[ 5604], 00:09:45.763 | 30.00th=[ 6063], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:09:45.763 | 70.00th=[ 6652], 80.00th=[ 6915], 90.00th=[ 7308], 95.00th=[ 8225], 00:09:45.763 | 99.00th=[10290], 99.50th=[10421], 99.90th=[10945], 99.95th=[11338], 00:09:45.763 | 99.99th=[11731] 00:09:45.763 bw ( KiB/s): min=11296, max=48487, per=50.89%, avg=28846.45, stdev=12687.15, samples=11 00:09:45.763 iops : min= 2824, max=12121, avg=7211.55, stdev=3171.67, samples=11 00:09:45.763 write: IOPS=8858, BW=34.6MiB/s (36.3MB/s)(171MiB/4942msec); 0 zone resets 00:09:45.763 slat (usec): min=9, max=1591, avg=43.28, stdev=98.29 00:09:45.763 clat (usec): min=1214, max=11421, avg=5223.15, stdev=1307.77 00:09:45.763 lat (usec): min=1237, max=11446, avg=5266.43, stdev=1319.44 00:09:45.763 clat percentiles (usec): 00:09:45.763 | 1.00th=[ 2311], 5.00th=[ 2900], 10.00th=[ 3261], 20.00th=[ 3851], 00:09:45.763 | 30.00th=[ 4490], 40.00th=[ 5342], 50.00th=[ 5669], 60.00th=[ 5866], 00:09:45.763 | 70.00th=[ 6063], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6718], 00:09:45.763 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[10159], 99.95th=[10421], 00:09:45.763 | 99.99th=[10814] 00:09:45.763 bw ( KiB/s): min=11544, max=49053, per=81.42%, avg=28852.09, stdev=12462.80, samples=11 00:09:45.763 iops : min= 2886, max=12263, avg=7213.00, stdev=3115.66, samples=11 00:09:45.763 lat (usec) : 1000=0.02% 00:09:45.763 lat (msec) : 2=0.36%, 4=11.51%, 10=86.95%, 20=1.16% 00:09:45.763 cpu : usr=5.80%, sys=24.08%, ctx=7629, majf=0, minf=151 00:09:45.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:45.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.763 issued rwts: total=85045,43780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.763 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:09:45.763 00:09:45.763 Run status group 0 (all jobs): 00:09:45.763 READ: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=332MiB (348MB), run=6002-6002msec 00:09:45.763 WRITE: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=171MiB (179MB), run=4942-4942msec 00:09:45.763 00:09:45.763 Disk stats (read/write): 00:09:45.763 nvme0n1: ios=84219/42806, merge=0/0, ticks=505831/208050, in_queue=713881, util=98.66% 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:09:45.763 07:37:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.763 rmmod nvme_tcp 00:09:45.763 rmmod nvme_fabrics 00:09:45.763 rmmod nvme_keyring 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 64604 ']' 
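The multipath teardown logged here reduces to a short, fixed sequence and ends by killing the nvmf_tgt process (pid 64604), as logged next; a minimal sketch using the same paths and names that appear in this run (error handling and the waitforserial_disconnect polling omitted):

    # host side: drop both multipath controllers for the subsystem
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # target side: remove the subsystem, then clean up fio verify state files
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
    # nvmftestfini then unloads the host modules and stops the nvmf_tgt process
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring
    kill 64604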
00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64604 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 64604 ']' 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 64604 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64604 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:45.763 killing process with pid 64604 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64604' 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 64604 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 64604 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:45.763 07:37:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:45.763 ************************************ 00:09:45.763 END TEST nvmf_target_multipath 00:09:45.763 ************************************ 00:09:45.763 00:09:45.763 real 0m19.274s 00:09:45.763 user 1m8.891s 00:09:45.763 sys 0m11.642s 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:45.763 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.023 ************************************ 00:09:46.023 START TEST nvmf_zcopy 00:09:46.023 ************************************ 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.023 * Looking for test storage... 
00:09:46.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.023 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.024 --rc genhtml_branch_coverage=1 00:09:46.024 --rc genhtml_function_coverage=1 00:09:46.024 --rc genhtml_legend=1 00:09:46.024 --rc geninfo_all_blocks=1 00:09:46.024 --rc geninfo_unexecuted_blocks=1 00:09:46.024 00:09:46.024 ' 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.024 --rc genhtml_branch_coverage=1 00:09:46.024 --rc genhtml_function_coverage=1 00:09:46.024 --rc genhtml_legend=1 00:09:46.024 --rc geninfo_all_blocks=1 00:09:46.024 --rc geninfo_unexecuted_blocks=1 00:09:46.024 00:09:46.024 ' 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.024 --rc genhtml_branch_coverage=1 00:09:46.024 --rc genhtml_function_coverage=1 00:09:46.024 --rc genhtml_legend=1 00:09:46.024 --rc geninfo_all_blocks=1 00:09:46.024 --rc geninfo_unexecuted_blocks=1 00:09:46.024 00:09:46.024 ' 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.024 --rc genhtml_branch_coverage=1 00:09:46.024 --rc genhtml_function_coverage=1 00:09:46.024 --rc genhtml_legend=1 00:09:46.024 --rc geninfo_all_blocks=1 00:09:46.024 --rc geninfo_unexecuted_blocks=1 00:09:46.024 00:09:46.024 ' 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
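The ver1/ver2 lines above are scripts/common.sh comparing the detected lcov version (1.15 in this run) against 2, which lets autotest_common.sh keep the older --rc option spellings that this lcov still accepts; the comparison helper is roughly the following (simplified sketch, not the exact script):

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS=.-:                       # split versions on dots, dashes and colons
        local op=$2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == ">" ]] && return 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == "<" ]] && return 0
            (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1
        done
        return 1                            # equal versions: strict < and > both fail
    }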
00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.024 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.024 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
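nvmftestinit ends in nvmf_veth_init, which runs next and builds the test network that the rest of this run talks to; condensed to its core commands (taken from the steps logged below, with the link-up and second-interface steps elided) it is roughly:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the initiator side stays in the root namespace, the target side moves into the netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the *_br peers together so 10.0.0.1 can reach 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT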
00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.315 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:46.316 07:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:46.316 Cannot find device "nvmf_init_br" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:46.316 07:37:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:46.316 Cannot find device "nvmf_init_br2" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:46.316 Cannot find device "nvmf_tgt_br" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.316 Cannot find device "nvmf_tgt_br2" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:46.316 Cannot find device "nvmf_init_br" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:46.316 Cannot find device "nvmf_init_br2" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:46.316 Cannot find device "nvmf_tgt_br" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:46.316 Cannot find device "nvmf_tgt_br2" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:46.316 Cannot find device "nvmf_br" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:46.316 Cannot find device "nvmf_init_if" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:46.316 Cannot find device "nvmf_init_if2" 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:46.316 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:46.580 07:37:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:46.580 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:46.580 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:09:46.580 00:09:46.580 --- 10.0.0.3 ping statistics --- 00:09:46.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.580 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:46.580 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:46.580 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:46.580 00:09:46.580 --- 10.0.0.4 ping statistics --- 00:09:46.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.580 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:46.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:46.580 00:09:46.580 --- 10.0.0.1 ping statistics --- 00:09:46.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.580 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:46.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:46.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:46.580 00:09:46.580 --- 10.0.0.2 ping statistics --- 00:09:46.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.580 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65118 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65118 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 65118 ']' 00:09:46.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:46.580 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.580 [2024-11-08 07:37:04.515995] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:09:46.580 [2024-11-08 07:37:04.516079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.839 [2024-11-08 07:37:04.672322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.839 [2024-11-08 07:37:04.726405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.839 [2024-11-08 07:37:04.726478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.839 [2024-11-08 07:37:04.726494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.839 [2024-11-08 07:37:04.726508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.839 [2024-11-08 07:37:04.726518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.839 [2024-11-08 07:37:04.726884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.839 [2024-11-08 07:37:04.773060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.098 [2024-11-08 07:37:04.889326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.098 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.099 [2024-11-08 07:37:04.905439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.099 malloc0 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.099 { 00:09:47.099 "params": { 00:09:47.099 "name": "Nvme$subsystem", 00:09:47.099 "trtype": "$TEST_TRANSPORT", 00:09:47.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.099 "adrfam": "ipv4", 00:09:47.099 "trsvcid": "$NVMF_PORT", 00:09:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.099 "hdgst": ${hdgst:-false}, 00:09:47.099 "ddgst": ${ddgst:-false} 00:09:47.099 }, 00:09:47.099 "method": "bdev_nvme_attach_controller" 00:09:47.099 } 00:09:47.099 EOF 00:09:47.099 )") 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
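Everything the zcopy test needs on the target side is the handful of RPCs shown above; issued directly with scripts/rpc.py (rpc_cmd in the log is a thin wrapper around it), the equivalent sequence would look like this sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with zero-copy enabled (remaining flags as logged by the test)
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1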
00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:47.099 07:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.099 "params": { 00:09:47.099 "name": "Nvme1", 00:09:47.099 "trtype": "tcp", 00:09:47.099 "traddr": "10.0.0.3", 00:09:47.099 "adrfam": "ipv4", 00:09:47.099 "trsvcid": "4420", 00:09:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.099 "hdgst": false, 00:09:47.099 "ddgst": false 00:09:47.099 }, 00:09:47.099 "method": "bdev_nvme_attach_controller" 00:09:47.099 }' 00:09:47.099 [2024-11-08 07:37:04.996174] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:09:47.099 [2024-11-08 07:37:04.996453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65144 ] 00:09:47.358 [2024-11-08 07:37:05.144858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.358 [2024-11-08 07:37:05.195202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.358 [2024-11-08 07:37:05.245000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.617 Running I/O for 10 seconds... 00:09:49.492 7958.00 IOPS, 62.17 MiB/s [2024-11-08T07:37:08.391Z] 8018.50 IOPS, 62.64 MiB/s [2024-11-08T07:37:09.769Z] 8040.67 IOPS, 62.82 MiB/s [2024-11-08T07:37:10.709Z] 8039.00 IOPS, 62.80 MiB/s [2024-11-08T07:37:11.644Z] 8042.40 IOPS, 62.83 MiB/s [2024-11-08T07:37:12.581Z] 8049.50 IOPS, 62.89 MiB/s [2024-11-08T07:37:13.517Z] 8057.00 IOPS, 62.95 MiB/s [2024-11-08T07:37:14.454Z] 8051.50 IOPS, 62.90 MiB/s [2024-11-08T07:37:15.390Z] 8043.78 IOPS, 62.84 MiB/s [2024-11-08T07:37:15.391Z] 8036.40 IOPS, 62.78 MiB/s 00:09:57.430 Latency(us) 00:09:57.430 [2024-11-08T07:37:15.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.430 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:57.430 Verification LBA range: start 0x0 length 0x1000 00:09:57.430 Nvme1n1 : 10.01 8037.64 62.79 0.00 0.00 15880.77 1505.77 24716.43 00:09:57.430 [2024-11-08T07:37:15.391Z] =================================================================================================================== 00:09:57.430 [2024-11-08T07:37:15.391Z] Total : 8037.64 62.79 0.00 0.00 15880.77 1505.77 24716.43 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65261 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.688 { 00:09:57.688 "params": { 00:09:57.688 "name": "Nvme$subsystem", 00:09:57.688 "trtype": "$TEST_TRANSPORT", 00:09:57.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.688 "adrfam": "ipv4", 00:09:57.688 "trsvcid": "$NVMF_PORT", 00:09:57.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.688 "hdgst": ${hdgst:-false}, 00:09:57.688 "ddgst": ${ddgst:-false} 00:09:57.688 }, 00:09:57.688 "method": "bdev_nvme_attach_controller" 00:09:57.688 } 00:09:57.688 EOF 00:09:57.688 )") 00:09:57.688 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:57.688 [2024-11-08 07:37:15.542052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.542095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.689 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:57.689 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:57.689 07:37:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.689 "params": { 00:09:57.689 "name": "Nvme1", 00:09:57.689 "trtype": "tcp", 00:09:57.689 "traddr": "10.0.0.3", 00:09:57.689 "adrfam": "ipv4", 00:09:57.689 "trsvcid": "4420", 00:09:57.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.689 "hdgst": false, 00:09:57.689 "ddgst": false 00:09:57.689 }, 00:09:57.689 "method": "bdev_nvme_attach_controller" 00:09:57.689 }' 00:09:57.689 [2024-11-08 07:37:15.554019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.554046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.689 [2024-11-08 07:37:15.566012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.566040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.689 [2024-11-08 07:37:15.568597] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:09:57.689 [2024-11-08 07:37:15.568662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65261 ] 00:09:57.689 [2024-11-08 07:37:15.578008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.579248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.689 [2024-11-08 07:37:15.590020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.590136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.689 [2024-11-08 07:37:15.602017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.602128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.689 [2024-11-08 07:37:15.614034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.614146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.689 [2024-11-08 07:37:15.626035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.626141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.689 [2024-11-08 07:37:15.638020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.689 [2024-11-08 07:37:15.638139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.650025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.650147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.662031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.662143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.674034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.674156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.686041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.686196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.698039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.698177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.707756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.948 [2024-11-08 07:37:15.710051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.710173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.722048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.722218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.734046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.734159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.746048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.746159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.757747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.948 [2024-11-08 07:37:15.758051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.758151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.770064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.770188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.782084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.782247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.794078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.794224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.806077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.806105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.807310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:57.948 [2024-11-08 07:37:15.818075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.818105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.830071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.830095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.842232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.842263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.854197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.854226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.866200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.866229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.878220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.878247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.890230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:57.948 [2024-11-08 07:37:15.890257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.948 [2024-11-08 07:37:15.902258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.948 [2024-11-08 07:37:15.902292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.207 [2024-11-08 07:37:15.914253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.207 [2024-11-08 07:37:15.914283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.207 Running I/O for 5 seconds... 00:09:58.207 [2024-11-08 07:37:15.930270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.207 [2024-11-08 07:37:15.930305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.207 [2024-11-08 07:37:15.941310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.207 [2024-11-08 07:37:15.941342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.207 [2024-11-08 07:37:15.956935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.207 [2024-11-08 07:37:15.957107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:15.976091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:15.976123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:15.991326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:15.991464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.006606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.006782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.022173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.022213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.038273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.038311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.052667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.052702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.063977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.064019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.078891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.078924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.094727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.094761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:58.208 [2024-11-08 07:37:16.109582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.109714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.125443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.125475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.140247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.140386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.208 [2024-11-08 07:37:16.156623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.208 [2024-11-08 07:37:16.156664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.171755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.171934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.187596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.187634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.202354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.202511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.218357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.218389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.231917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.231949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.246656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.246688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.262006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.262037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.276933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.276965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.292027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.292057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.307288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.307321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.322978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 
[2024-11-08 07:37:16.323024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.337265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.337398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.348280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.348422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.363477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.363605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.379587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.379620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.393774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.393806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.404869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.404901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.467 [2024-11-08 07:37:16.420476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.467 [2024-11-08 07:37:16.420507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.435116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.435148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.449472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.449505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.460585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.460618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.476700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.476736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.492661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.492695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.508928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.508962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.520453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.520491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.535820] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.535856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.551023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.551055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.565482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.565515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.581616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.581649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.595829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.595863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.607028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.607060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.622024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.622055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.638495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.638527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.654240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.654273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.668660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.668696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.726 [2024-11-08 07:37:16.679892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.726 [2024-11-08 07:37:16.679928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.695326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.695359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.710943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.710986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.726524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.726554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.742916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.742953] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.754117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.754148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.769450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.769589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.786012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.786047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.797401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.797435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.812265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.812417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.828851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.828884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.840141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.840174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.855359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.855517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.871462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.871498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.887927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.887963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.902130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.902166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.913043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.913077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 15417.00 IOPS, 120.45 MiB/s [2024-11-08T07:37:16.947Z] [2024-11-08 07:37:16.928688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.986 [2024-11-08 07:37:16.928833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.986 [2024-11-08 07:37:16.943948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:16.944099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 
07:37:16.958658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:16.958695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:16.969859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:16.969894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:16.985285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:16.985318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.001280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.001314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.012895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.012930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.028358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.028391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.044154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.044186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.058253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.058284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.073809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.073838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.089242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.089276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.104628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.104662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.120550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.120583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.134436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.134468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.149659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.149691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.165870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.165902] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.177071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.177102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.245 [2024-11-08 07:37:17.191898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.245 [2024-11-08 07:37:17.192060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.204102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.204134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.218197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.218246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.233540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.233572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.249720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.249758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.266015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.266049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.277183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.277215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.292081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.292111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.307969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.308137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.322569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.322601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.333439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.333473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.348923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.348956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.364798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.364831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.380267] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.380303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.395155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.395318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.410675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.410808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.426772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.426805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.441026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.441058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.504 [2024-11-08 07:37:17.455714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.504 [2024-11-08 07:37:17.455748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.471202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.471238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.485859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.486057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.501363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.501525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.518150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.518182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.533578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.533612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.547800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.547832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.561989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.562020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.577634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.577665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.593116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.593148] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.607359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.607390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.622184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.622218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.637294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.637328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.652053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.652088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.668216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.668250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.682739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.682771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.699283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.699419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.763 [2024-11-08 07:37:17.715324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.763 [2024-11-08 07:37:17.715356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.021 [2024-11-08 07:37:17.726678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.021 [2024-11-08 07:37:17.726809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.021 [2024-11-08 07:37:17.741873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.021 [2024-11-08 07:37:17.742023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.021 [2024-11-08 07:37:17.758197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.021 [2024-11-08 07:37:17.758231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.021 [2024-11-08 07:37:17.774286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.774319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.785417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.785552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.800000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.800041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.810985] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.811037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.825878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.825915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.842090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.842123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.853292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.853427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.868160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.868305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.884290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.884322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.898547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.898579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.912786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.912824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 15591.50 IOPS, 121.81 MiB/s [2024-11-08T07:37:17.983Z] [2024-11-08 07:37:17.923878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.924053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.939273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.939415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.954941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.955083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-11-08 07:37:17.970065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-11-08 07:37:17.970095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:17.985933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:17.986090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.001078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.001109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.017678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:00.280 [2024-11-08 07:37:18.017711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.034763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.034800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.050349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.050384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.061454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.061487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.076373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.076508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.091935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.092082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.106568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.106598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.117762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.117793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.133405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.133443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.149582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.149618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.165298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.165335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.179722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.179867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.191176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.191209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.206552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.206686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.280 [2024-11-08 07:37:18.222340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.280 [2024-11-08 07:37:18.222373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.281 [2024-11-08 07:37:18.236677] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.281 [2024-11-08 07:37:18.236716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.539 [2024-11-08 07:37:18.253020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.539 [2024-11-08 07:37:18.253063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.539 [2024-11-08 07:37:18.264342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.264514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.279904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.280063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.296192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.296224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.307215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.307344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.322727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.322855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.338729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.338856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.354383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.354545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.369505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.369633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.384299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.384461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.399925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.400123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.415631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.415800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.431459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.431620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.446100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.446226] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.461839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.461989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.476344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.476472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.540 [2024-11-08 07:37:18.487266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.540 [2024-11-08 07:37:18.487391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.502735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.502860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.518358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.518508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.532804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.532932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.547722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.547853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.558210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.558339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.573396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.573522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.589904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.590045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.605492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.605632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.620030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.620184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.631335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.631461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.646345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.646491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.662868] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.663006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.677360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.677392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.688436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.688565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.704109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.704141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.718822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.718855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.733171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.733201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.799 [2024-11-08 07:37:18.744276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.799 [2024-11-08 07:37:18.744312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.759809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.759987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.776202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.776235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.790666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.790699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.805185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.805214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.821312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.821349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.835392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.835524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.846402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.846538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.862086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.862119] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.877383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.877533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.891604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.891639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.906831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.906866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 15591.33 IOPS, 121.81 MiB/s [2024-11-08T07:37:19.019Z] [2024-11-08 07:37:18.922552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.922585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.936846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.936995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.948039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.948070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.963283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.963413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.979510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.979544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:18.990574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:18.990708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.058 [2024-11-08 07:37:19.006093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.058 [2024-11-08 07:37:19.006126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.021095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.021128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.035437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.035470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.049811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.049844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.065855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.065888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 
07:37:19.079917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.079950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.094934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.094965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.110585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.110618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.124965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.125042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.136152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.136289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.151729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.151863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.167686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.167719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.181906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.181939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.193171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.193201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.209616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.209663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.224877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.224912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.240324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.240359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.255692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.255725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.318 [2024-11-08 07:37:19.270683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.318 [2024-11-08 07:37:19.270822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.287167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.287201] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.303352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.303388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.314669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.314704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.329881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.330035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.345836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.345990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.360132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.360164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.374871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.374902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.385367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.385495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.400253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.400397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.416060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.416092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.430311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.430343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.445243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.445276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.456210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.456244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.471628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.471779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.488055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.488181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.503862] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.504014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.519037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.519162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.578 [2024-11-08 07:37:19.535026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.578 [2024-11-08 07:37:19.535153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.549974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.550108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.561753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.561884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.578163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.578306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.593548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.593696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.607936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.608113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.623491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.623524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.638705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.638739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.654208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.654240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.669294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.669327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.683533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.683568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.695104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.695138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.710237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.710271] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.726440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.726490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.740887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.741037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.756382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.756512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.771808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.771951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.837 [2024-11-08 07:37:19.786699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.837 [2024-11-08 07:37:19.786732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.094 [2024-11-08 07:37:19.797981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.798023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.813410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.813441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.829207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.829239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.845010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.845047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.860768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.860806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.877107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.877140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.892994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.893025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.907248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.907387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 15597.25 IOPS, 121.85 MiB/s [2024-11-08T07:37:20.056Z] [2024-11-08 07:37:19.917837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.917867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 
07:37:19.932320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.932353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.943410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.943443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.958778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.958919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.974409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.974449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:19.986244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:19.986277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:20.001068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:20.001100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:20.012641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:20.012838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:20.028299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:20.028340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.095 [2024-11-08 07:37:20.043617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.095 [2024-11-08 07:37:20.043762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.057908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.057943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.073342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.073375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.089526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.089562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.105842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.105877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.121917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.121958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.136009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.136047] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.151226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.151264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.166665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.166807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.181221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.181352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.196570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.196698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.211840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.211968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.229655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.229688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-11-08 07:37:20.244419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-11-08 07:37:20.244450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.354 [2024-11-08 07:37:20.255193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.354 [2024-11-08 07:37:20.255339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.354 [2024-11-08 07:37:20.270453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.354 [2024-11-08 07:37:20.270619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.354 [2024-11-08 07:37:20.286208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.354 [2024-11-08 07:37:20.286236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.354 [2024-11-08 07:37:20.300670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.354 [2024-11-08 07:37:20.300697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.316790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.316823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.327656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.327690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.342625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.342659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.358779] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.358818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.373131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.373346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.384507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.384659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.400230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.400264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.415593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.415729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.431138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.431173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.446647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.446854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.461745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.461955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.478421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.478484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.494234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.494268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.509637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.509776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.525549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.525582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.540183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.540318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.556358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.556390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.648 [2024-11-08 07:37:20.570715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.648 [2024-11-08 07:37:20.570748] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.582214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.582246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.597180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.597221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.612990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.613037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.627636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.627875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.639248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.639291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.654630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.654785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.670949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.670998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.687412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.687447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.703813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.703847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.719787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.719821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.734098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.734130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.745045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.745077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-11-08 07:37:20.760139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-11-08 07:37:20.760276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.776296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.776330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.787471] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.787607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.802728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.802859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.818566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.818599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.832766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.832798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.843999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.844031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.859170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.859201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.875058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.875090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.940 [2024-11-08 07:37:20.889874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.940 [2024-11-08 07:37:20.890036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:20.907104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:20.907135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 15587.20 IOPS, 121.78 MiB/s [2024-11-08T07:37:21.159Z] [2024-11-08 07:37:20.922654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:20.922691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 00:10:03.198 Latency(us) 00:10:03.198 [2024-11-08T07:37:21.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.198 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:03.198 Nvme1n1 : 5.01 15584.20 121.75 0.00 0.00 8204.33 2746.27 18724.57 00:10:03.198 [2024-11-08T07:37:21.159Z] =================================================================================================================== 00:10:03.198 [2024-11-08T07:37:21.159Z] Total : 15584.20 121.75 0.00 0.00 8204.33 2746.27 18724.57 00:10:03.198 [2024-11-08 07:37:20.931803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:20.931836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:20.943786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:20.943817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 
07:37:20.955797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:20.955831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:20.967784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:20.968017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:20.979806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:20.979840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:20.991804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:20.991836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.003805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.003834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.015797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.015822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.027798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.027823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.039833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.040064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.051839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.051868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.063845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.063879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.075825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.075848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.087833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.087856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-11-08 07:37:21.099838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-11-08 07:37:21.099864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65261) - No such process 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65261 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.198 delay0 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.198 07:37:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:03.457 [2024-11-08 07:37:21.314575] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:10.022 Initializing NVMe Controllers 00:10:10.022 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:10.022 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:10.022 Initialization complete. Launching workers. 
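For reference, the sequence zcopy.sh drives above (drop the original namespace, create a delay bdev on top of malloc0, re-attach it as NSID 1, then run the abort example against it) can be reproduced outside the harness. A minimal sketch, assuming a running nvmf_tgt with cnode1 and a malloc0 bdev already configured, and using the test's rpc_cmd helper (scripts/rpc.py can be substituted outside autotest_common.sh):

  # Sketch only: mirrors the rpc_cmd / abort invocations logged above.
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Delay bdev over malloc0 with large artificial latencies (values taken from the log),
  # presumably so the abort example still has in-flight I/O left to cancel.
  rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # 5-second randrw abort workload against the TCP listener used by the test.
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'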
00:10:10.022 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 273, failed: 14818 00:10:10.022 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15004, failed to submit 87 00:10:10.022 success 14938, unsuccessful 66, failed 0 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.022 rmmod nvme_tcp 00:10:10.022 rmmod nvme_fabrics 00:10:10.022 rmmod nvme_keyring 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.022 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65118 ']' 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65118 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 65118 ']' 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 65118 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65118 00:10:10.023 killing process with pid 65118 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65118' 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 65118 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 65118 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:10.023 07:37:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:10.023 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:10.282 07:37:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:10.282 00:10:10.282 real 0m24.350s 00:10:10.282 user 0m38.598s 00:10:10.282 sys 0m8.256s 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.282 ************************************ 00:10:10.282 END TEST nvmf_zcopy 00:10:10.282 ************************************ 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.282 ************************************ 00:10:10.282 START TEST nvmf_nmic 00:10:10.282 ************************************ 00:10:10.282 07:37:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:10.282 * Looking for test storage... 00:10:10.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:10.282 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.542 --rc genhtml_branch_coverage=1 00:10:10.542 --rc genhtml_function_coverage=1 00:10:10.542 --rc genhtml_legend=1 00:10:10.542 --rc geninfo_all_blocks=1 00:10:10.542 --rc geninfo_unexecuted_blocks=1 00:10:10.542 00:10:10.542 ' 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.542 --rc genhtml_branch_coverage=1 00:10:10.542 --rc genhtml_function_coverage=1 00:10:10.542 --rc genhtml_legend=1 00:10:10.542 --rc geninfo_all_blocks=1 00:10:10.542 --rc geninfo_unexecuted_blocks=1 00:10:10.542 00:10:10.542 ' 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.542 --rc genhtml_branch_coverage=1 00:10:10.542 --rc genhtml_function_coverage=1 00:10:10.542 --rc genhtml_legend=1 00:10:10.542 --rc geninfo_all_blocks=1 00:10:10.542 --rc geninfo_unexecuted_blocks=1 00:10:10.542 00:10:10.542 ' 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:10.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.542 --rc genhtml_branch_coverage=1 00:10:10.542 --rc genhtml_function_coverage=1 00:10:10.542 --rc genhtml_legend=1 00:10:10.542 --rc geninfo_all_blocks=1 00:10:10.542 --rc geninfo_unexecuted_blocks=1 00:10:10.542 00:10:10.542 ' 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.542 07:37:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.542 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:10.543 07:37:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:10.543 Cannot 
find device "nvmf_init_br" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:10.543 Cannot find device "nvmf_init_br2" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:10.543 Cannot find device "nvmf_tgt_br" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.543 Cannot find device "nvmf_tgt_br2" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:10.543 Cannot find device "nvmf_init_br" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:10.543 Cannot find device "nvmf_init_br2" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:10.543 Cannot find device "nvmf_tgt_br" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:10.543 Cannot find device "nvmf_tgt_br2" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:10.543 Cannot find device "nvmf_br" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:10.543 Cannot find device "nvmf_init_if" 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:10.543 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:10.802 Cannot find device "nvmf_init_if2" 00:10:10.802 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:10.802 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.802 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:10.802 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.802 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:10.803 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:11.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:11.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:10:11.062 00:10:11.062 --- 10.0.0.3 ping statistics --- 00:10:11.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.062 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:11.062 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:11.062 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:10:11.062 00:10:11.062 --- 10.0.0.4 ping statistics --- 00:10:11.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.062 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:11.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:11.062 00:10:11.062 --- 10.0.0.1 ping statistics --- 00:10:11.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.062 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:11.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:10:11.062 00:10:11.062 --- 10.0.0.2 ping statistics --- 00:10:11.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.062 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65635 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65635 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 65635 ']' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:11.062 07:37:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.062 [2024-11-08 07:37:28.888475] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:10:11.062 [2024-11-08 07:37:28.888554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.321 [2024-11-08 07:37:29.040470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.321 [2024-11-08 07:37:29.106631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.321 [2024-11-08 07:37:29.106902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.321 [2024-11-08 07:37:29.107128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.321 [2024-11-08 07:37:29.107208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.321 [2024-11-08 07:37:29.107378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.321 [2024-11-08 07:37:29.108566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.321 [2024-11-08 07:37:29.108778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.321 [2024-11-08 07:37:29.108866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.321 [2024-11-08 07:37:29.108867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.321 [2024-11-08 07:37:29.157382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.321 [2024-11-08 07:37:29.267076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.321 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.581 Malloc0 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.581 07:37:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.581 [2024-11-08 07:37:29.332998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:11.581 test case1: single bdev can't be used in multiple subsystems 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.581 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.581 [2024-11-08 07:37:29.356732] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:11.581 [2024-11-08 07:37:29.356927] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:11.581 [2024-11-08 07:37:29.356948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.581 request: 00:10:11.581 { 00:10:11.581 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:11.581 "namespace": { 00:10:11.581 "bdev_name": "Malloc0", 00:10:11.581 "no_auto_visible": false 00:10:11.581 }, 00:10:11.581 "method": "nvmf_subsystem_add_ns", 00:10:11.581 "req_id": 1 00:10:11.581 } 00:10:11.581 Got JSON-RPC error response 00:10:11.581 response: 00:10:11.581 { 00:10:11.581 "code": -32602, 00:10:11.582 "message": "Invalid parameters" 00:10:11.582 } 00:10:11.582 Adding namespace failed - expected result. 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:11.582 test case2: host connect to nvmf target in multiple paths 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.582 [2024-11-08 07:37:29.368943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:11.582 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:11.846 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.846 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:11.846 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.846 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:11.846 07:37:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:13.748 07:37:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:13.748 07:37:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:13.748 07:37:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.748 07:37:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:13.748 07:37:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.748 07:37:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:13.748 07:37:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:13.748 [global] 00:10:13.748 thread=1 00:10:13.748 invalidate=1 00:10:13.748 rw=write 00:10:13.748 time_based=1 00:10:13.748 runtime=1 00:10:13.748 ioengine=libaio 00:10:13.748 direct=1 00:10:13.748 bs=4096 00:10:13.748 iodepth=1 00:10:13.748 norandommap=0 00:10:13.748 numjobs=1 00:10:13.748 00:10:13.748 verify_dump=1 00:10:13.748 verify_backlog=512 00:10:13.748 verify_state_save=0 00:10:13.748 do_verify=1 00:10:13.748 verify=crc32c-intel 00:10:13.748 [job0] 00:10:13.748 filename=/dev/nvme0n1 00:10:14.007 Could not set queue depth (nvme0n1) 00:10:14.007 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.007 fio-3.35 00:10:14.007 Starting 1 thread 00:10:15.384 00:10:15.384 job0: (groupid=0, jobs=1): err= 0: pid=65719: Fri Nov 8 07:37:32 2024 00:10:15.384 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:15.384 slat (nsec): min=7562, max=39387, avg=9913.56, stdev=2889.70 00:10:15.384 clat (usec): min=109, max=284, avg=150.83, stdev=16.73 00:10:15.384 lat (usec): min=120, max=296, avg=160.75, stdev=17.39 00:10:15.384 clat percentiles (usec): 00:10:15.384 | 1.00th=[ 117], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 139], 00:10:15.384 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:10:15.384 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 180], 00:10:15.384 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 249], 99.95th=[ 277], 00:10:15.384 | 99.99th=[ 285] 00:10:15.384 write: IOPS=3927, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1001msec); 0 zone resets 00:10:15.384 slat (usec): min=11, max=122, avg=15.62, stdev= 5.30 00:10:15.384 clat (usec): min=66, max=227, avg=90.13, stdev=12.32 00:10:15.384 lat (usec): min=79, max=349, avg=105.75, stdev=14.44 00:10:15.384 clat percentiles (usec): 00:10:15.384 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 80], 00:10:15.384 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 93], 00:10:15.385 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 113], 00:10:15.385 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 145], 99.95th=[ 157], 00:10:15.385 | 99.99th=[ 227] 00:10:15.385 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:10:15.385 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:15.385 lat (usec) : 100=42.83%, 250=57.13%, 500=0.04% 00:10:15.385 cpu : usr=2.60%, sys=7.50%, ctx=7515, majf=0, minf=5 00:10:15.385 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.385 issued rwts: total=3584,3931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.385 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.385 00:10:15.385 Run status group 0 (all jobs): 00:10:15.385 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:10:15.385 WRITE: bw=15.3MiB/s (16.1MB/s), 15.3MiB/s-15.3MiB/s (16.1MB/s-16.1MB/s), io=15.4MiB (16.1MB), run=1001-1001msec 00:10:15.385 00:10:15.385 Disk stats (read/write): 00:10:15.385 nvme0n1: ios=3219/3584, merge=0/0, ticks=496/347, in_queue=843, util=91.37% 
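[annotation, not part of the captured console output] The fio job printed above is generated by scripts/fio-wrapper invoked with "-p nvmf -i 4096 -d 1 -t write -r 1 -v". A rough standalone sketch of the same workload, assuming the namespace is still exposed as /dev/nvme0n1 on the initiator (the job name "job0" and the flat command form are illustrative, not how the wrapper actually invokes fio), would be:

    # single-threaded 4 KiB sequential-write job with CRC32C data verification,
    # mirroring the [job0] section shown in the log
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 --norandommap=0 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0

The "Could not set queue depth (nvme0n1)" message below is expected with libaio against an NVMe-oF block device and does not fail the run; the pass/fail signal is the verify phase and the nonzero read/write throughput reported in the run status group.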
00:10:15.385 07:37:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.385 rmmod nvme_tcp 00:10:15.385 rmmod nvme_fabrics 00:10:15.385 rmmod nvme_keyring 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65635 ']' 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65635 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 65635 ']' 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 65635 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65635 00:10:15.385 killing process with pid 65635 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65635' 00:10:15.385 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 65635 00:10:15.385 
07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 65635 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:15.644 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:15.956 ************************************ 00:10:15.956 END TEST nvmf_nmic 00:10:15.956 ************************************ 00:10:15.956 00:10:15.956 real 0m5.541s 00:10:15.956 user 0m15.583s 00:10:15.956 sys 0m2.832s 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.956 07:37:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.956 ************************************ 00:10:15.956 START TEST nvmf_fio_target 00:10:15.956 ************************************ 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.956 * Looking for test storage... 00:10:15.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:15.956 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:16.233 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.234 --rc genhtml_branch_coverage=1 00:10:16.234 --rc genhtml_function_coverage=1 00:10:16.234 --rc genhtml_legend=1 00:10:16.234 --rc geninfo_all_blocks=1 00:10:16.234 --rc geninfo_unexecuted_blocks=1 00:10:16.234 00:10:16.234 ' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.234 --rc genhtml_branch_coverage=1 00:10:16.234 --rc genhtml_function_coverage=1 00:10:16.234 --rc genhtml_legend=1 00:10:16.234 --rc geninfo_all_blocks=1 00:10:16.234 --rc geninfo_unexecuted_blocks=1 00:10:16.234 00:10:16.234 ' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.234 --rc genhtml_branch_coverage=1 00:10:16.234 --rc genhtml_function_coverage=1 00:10:16.234 --rc genhtml_legend=1 00:10:16.234 --rc geninfo_all_blocks=1 00:10:16.234 --rc geninfo_unexecuted_blocks=1 00:10:16.234 00:10:16.234 ' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:16.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.234 --rc genhtml_branch_coverage=1 00:10:16.234 --rc genhtml_function_coverage=1 00:10:16.234 --rc genhtml_legend=1 00:10:16.234 --rc geninfo_all_blocks=1 00:10:16.234 --rc geninfo_unexecuted_blocks=1 00:10:16.234 00:10:16.234 ' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:16.234 
07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.234 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.234 07:37:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.234 07:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:16.234 Cannot find device "nvmf_init_br" 00:10:16.234 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:16.235 Cannot find device "nvmf_init_br2" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:16.235 Cannot find device "nvmf_tgt_br" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.235 Cannot find device "nvmf_tgt_br2" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:16.235 Cannot find device "nvmf_init_br" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:16.235 Cannot find device "nvmf_init_br2" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:16.235 Cannot find device "nvmf_tgt_br" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:16.235 Cannot find device "nvmf_tgt_br2" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:16.235 Cannot find device "nvmf_br" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:16.235 Cannot find device "nvmf_init_if" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:16.235 Cannot find device "nvmf_init_if2" 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:16.235 
07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:16.235 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:16.494 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.495 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:16.495 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:16.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:10:16.495 00:10:16.495 --- 10.0.0.3 ping statistics --- 00:10:16.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.495 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:16.495 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:16.495 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:16.495 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:10:16.495 00:10:16.495 --- 10.0.0.4 ping statistics --- 00:10:16.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.495 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:16.495 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:16.495 00:10:16.495 --- 10.0.0.1 ping statistics --- 00:10:16.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.495 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:16.495 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:16.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:10:16.495 00:10:16.495 --- 10.0.0.2 ping statistics --- 00:10:16.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.495 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65951 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65951 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 65951 ']' 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.754 07:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.754 [2024-11-08 07:37:34.531224] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:10:16.754 [2024-11-08 07:37:34.531303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.754 [2024-11-08 07:37:34.682109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.013 [2024-11-08 07:37:34.739122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.013 [2024-11-08 07:37:34.739186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.013 [2024-11-08 07:37:34.739202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.013 [2024-11-08 07:37:34.739215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.013 [2024-11-08 07:37:34.739226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.013 [2024-11-08 07:37:34.740369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.013 [2024-11-08 07:37:34.740516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.013 [2024-11-08 07:37:34.741547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.013 [2024-11-08 07:37:34.741550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.013 [2024-11-08 07:37:34.789356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.579 07:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:17.579 07:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:17.579 07:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.579 07:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:17.579 07:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.579 07:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.579 07:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:17.836 [2024-11-08 07:37:35.747678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.836 07:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.400 07:37:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:18.400 07:37:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.657 07:37:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:18.657 07:37:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.916 07:37:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:18.916 07:37:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.483 07:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:19.483 07:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:19.483 07:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.742 07:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:19.742 07:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.001 07:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:20.001 07:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.260 07:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:20.260 07:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:20.518 07:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.777 07:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:20.777 07:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.036 07:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:21.036 07:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:21.036 07:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:21.294 [2024-11-08 07:37:39.144424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:21.294 07:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:21.553 07:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:21.811 07:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:21.811 07:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:21.811 07:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:21.811 07:37:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.811 07:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:21.811 07:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:21.811 07:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:24.344 07:37:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:24.344 07:37:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:24.344 07:37:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.344 07:37:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:24.344 07:37:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.344 07:37:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:24.344 07:37:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:24.344 [global] 00:10:24.344 thread=1 00:10:24.344 invalidate=1 00:10:24.344 rw=write 00:10:24.344 time_based=1 00:10:24.344 runtime=1 00:10:24.344 ioengine=libaio 00:10:24.344 direct=1 00:10:24.344 bs=4096 00:10:24.344 iodepth=1 00:10:24.344 norandommap=0 00:10:24.344 numjobs=1 00:10:24.344 00:10:24.344 verify_dump=1 00:10:24.344 verify_backlog=512 00:10:24.344 verify_state_save=0 00:10:24.344 do_verify=1 00:10:24.344 verify=crc32c-intel 00:10:24.344 [job0] 00:10:24.344 filename=/dev/nvme0n1 00:10:24.344 [job1] 00:10:24.344 filename=/dev/nvme0n2 00:10:24.344 [job2] 00:10:24.344 filename=/dev/nvme0n3 00:10:24.344 [job3] 00:10:24.344 filename=/dev/nvme0n4 00:10:24.344 Could not set queue depth (nvme0n1) 00:10:24.344 Could not set queue depth (nvme0n2) 00:10:24.344 Could not set queue depth (nvme0n3) 00:10:24.344 Could not set queue depth (nvme0n4) 00:10:24.344 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.344 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.344 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.344 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.344 fio-3.35 00:10:24.344 Starting 4 threads 00:10:25.279 00:10:25.279 job0: (groupid=0, jobs=1): err= 0: pid=66142: Fri Nov 8 07:37:43 2024 00:10:25.279 read: IOPS=1955, BW=7820KiB/s (8008kB/s)(7828KiB/1001msec) 00:10:25.279 slat (nsec): min=7729, max=33528, avg=12962.99, stdev=2795.03 00:10:25.279 clat (usec): min=154, max=3201, avg=276.93, stdev=77.75 00:10:25.279 lat (usec): min=167, max=3215, avg=289.89, stdev=77.98 00:10:25.279 clat percentiles (usec): 00:10:25.279 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:10:25.279 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:10:25.279 | 70.00th=[ 281], 80.00th=[ 310], 90.00th=[ 338], 95.00th=[ 351], 00:10:25.279 | 99.00th=[ 396], 99.50th=[ 474], 99.90th=[ 545], 99.95th=[ 3195], 00:10:25.279 | 99.99th=[ 3195] 
00:10:25.279 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:25.279 slat (usec): min=11, max=301, avg=21.58, stdev= 9.35 00:10:25.279 clat (usec): min=96, max=2701, avg=186.61, stdev=75.68 00:10:25.279 lat (usec): min=116, max=2720, avg=208.20, stdev=78.14 00:10:25.279 clat percentiles (usec): 00:10:25.279 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 117], 20.00th=[ 145], 00:10:25.279 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:10:25.279 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 281], 95.00th=[ 297], 00:10:25.279 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 359], 00:10:25.279 | 99.99th=[ 2704] 00:10:25.279 bw ( KiB/s): min= 8192, max= 8192, per=17.64%, avg=8192.00, stdev= 0.00, samples=1 00:10:25.279 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:25.279 lat (usec) : 100=0.12%, 250=58.18%, 500=41.55%, 750=0.10% 00:10:25.279 lat (msec) : 4=0.05% 00:10:25.279 cpu : usr=1.40%, sys=5.80%, ctx=4007, majf=0, minf=15 00:10:25.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.279 issued rwts: total=1957,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.279 job1: (groupid=0, jobs=1): err= 0: pid=66143: Fri Nov 8 07:37:43 2024 00:10:25.279 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:25.279 slat (nsec): min=7213, max=42119, avg=9678.45, stdev=2407.56 00:10:25.279 clat (usec): min=114, max=523, avg=144.42, stdev=14.02 00:10:25.279 lat (usec): min=122, max=531, avg=154.10, stdev=14.86 00:10:25.279 clat percentiles (usec): 00:10:25.279 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:10:25.279 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:10:25.279 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 00:10:25.279 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 217], 99.95th=[ 227], 00:10:25.279 | 99.99th=[ 523] 00:10:25.279 write: IOPS=3725, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1001msec); 0 zone resets 00:10:25.279 slat (usec): min=8, max=140, avg=15.27, stdev= 6.11 00:10:25.279 clat (usec): min=74, max=1593, avg=102.59, stdev=27.08 00:10:25.279 lat (usec): min=87, max=1606, avg=117.86, stdev=28.61 00:10:25.279 clat percentiles (usec): 00:10:25.279 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 92], 00:10:25.279 | 30.00th=[ 95], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 104], 00:10:25.280 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 123], 00:10:25.280 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 159], 99.95th=[ 215], 00:10:25.280 | 99.99th=[ 1598] 00:10:25.280 bw ( KiB/s): min=16384, max=16384, per=35.29%, avg=16384.00, stdev= 0.00, samples=1 00:10:25.280 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:25.280 lat (usec) : 100=23.36%, 250=76.62%, 750=0.01% 00:10:25.280 lat (msec) : 2=0.01% 00:10:25.280 cpu : usr=2.50%, sys=7.20%, ctx=7317, majf=0, minf=19 00:10:25.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.280 issued rwts: total=3584,3729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.280 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:25.280 job2: (groupid=0, jobs=1): err= 0: pid=66144: Fri Nov 8 07:37:43 2024 00:10:25.280 read: IOPS=3111, BW=12.2MiB/s (12.7MB/s)(12.2MiB/1001msec) 00:10:25.280 slat (nsec): min=7590, max=54403, avg=10486.00, stdev=3342.28 00:10:25.280 clat (usec): min=129, max=413, avg=159.56, stdev=17.62 00:10:25.280 lat (usec): min=137, max=423, avg=170.05, stdev=18.50 00:10:25.280 clat percentiles (usec): 00:10:25.280 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:25.280 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:10:25.280 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 184], 00:10:25.280 | 99.00th=[ 202], 99.50th=[ 260], 99.90th=[ 359], 99.95th=[ 400], 00:10:25.280 | 99.99th=[ 416] 00:10:25.280 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:25.280 slat (usec): min=9, max=132, avg=15.71, stdev= 5.58 00:10:25.280 clat (usec): min=84, max=361, avg=113.34, stdev=13.06 00:10:25.280 lat (usec): min=97, max=373, avg=129.05, stdev=16.36 00:10:25.280 clat percentiles (usec): 00:10:25.280 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 103], 00:10:25.280 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 116], 00:10:25.280 | 70.00th=[ 119], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 135], 00:10:25.280 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 174], 99.95th=[ 231], 00:10:25.280 | 99.99th=[ 363] 00:10:25.280 bw ( KiB/s): min=12960, max=12960, per=27.91%, avg=12960.00, stdev= 0.00, samples=1 00:10:25.280 iops : min= 3240, max= 3240, avg=3240.00, stdev= 0.00, samples=1 00:10:25.280 lat (usec) : 100=7.00%, 250=92.73%, 500=0.27% 00:10:25.280 cpu : usr=2.10%, sys=7.30%, ctx=6699, majf=0, minf=5 00:10:25.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.280 issued rwts: total=3115,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.280 job3: (groupid=0, jobs=1): err= 0: pid=66145: Fri Nov 8 07:37:43 2024 00:10:25.280 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:25.280 slat (nsec): min=7689, max=27901, avg=8774.78, stdev=1664.16 00:10:25.280 clat (usec): min=142, max=641, avg=275.44, stdev=40.53 00:10:25.280 lat (usec): min=151, max=650, avg=284.22, stdev=40.62 00:10:25.280 clat percentiles (usec): 00:10:25.280 | 1.00th=[ 161], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 249], 00:10:25.280 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:10:25.280 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 347], 00:10:25.280 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 603], 99.95th=[ 619], 00:10:25.280 | 99.99th=[ 644] 00:10:25.280 write: IOPS=2255, BW=9023KiB/s (9240kB/s)(9032KiB/1001msec); 0 zone resets 00:10:25.280 slat (usec): min=11, max=105, avg=14.58, stdev= 5.97 00:10:25.280 clat (usec): min=96, max=278, avg=168.62, stdev=32.99 00:10:25.280 lat (usec): min=108, max=383, avg=183.20, stdev=34.82 00:10:25.280 clat percentiles (usec): 00:10:25.280 | 1.00th=[ 102], 5.00th=[ 110], 10.00th=[ 118], 20.00th=[ 128], 00:10:25.280 | 30.00th=[ 157], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:10:25.280 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:10:25.280 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 262], 99.95th=[ 265], 00:10:25.280 | 
99.99th=[ 277] 00:10:25.280 bw ( KiB/s): min= 9328, max= 9328, per=20.09%, avg=9328.00, stdev= 0.00, samples=1 00:10:25.280 iops : min= 2332, max= 2332, avg=2332.00, stdev= 0.00, samples=1 00:10:25.280 lat (usec) : 100=0.30%, 250=62.49%, 500=37.11%, 750=0.09% 00:10:25.280 cpu : usr=1.20%, sys=4.10%, ctx=4306, majf=0, minf=9 00:10:25.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.280 issued rwts: total=2048,2258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.280 00:10:25.280 Run status group 0 (all jobs): 00:10:25.280 READ: bw=41.8MiB/s (43.8MB/s), 7820KiB/s-14.0MiB/s (8008kB/s-14.7MB/s), io=41.8MiB (43.8MB), run=1001-1001msec 00:10:25.280 WRITE: bw=45.3MiB/s (47.5MB/s), 8184KiB/s-14.6MiB/s (8380kB/s-15.3MB/s), io=45.4MiB (47.6MB), run=1001-1001msec 00:10:25.280 00:10:25.280 Disk stats (read/write): 00:10:25.280 nvme0n1: ios=1586/1879, merge=0/0, ticks=446/368, in_queue=814, util=85.77% 00:10:25.280 nvme0n2: ios=3066/3072, merge=0/0, ticks=460/330, in_queue=790, util=86.37% 00:10:25.280 nvme0n3: ios=2560/3070, merge=0/0, ticks=418/364, in_queue=782, util=88.77% 00:10:25.280 nvme0n4: ios=1625/2048, merge=0/0, ticks=454/361, in_queue=815, util=89.43% 00:10:25.280 07:37:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:25.280 [global] 00:10:25.280 thread=1 00:10:25.280 invalidate=1 00:10:25.280 rw=randwrite 00:10:25.280 time_based=1 00:10:25.280 runtime=1 00:10:25.280 ioengine=libaio 00:10:25.280 direct=1 00:10:25.280 bs=4096 00:10:25.280 iodepth=1 00:10:25.280 norandommap=0 00:10:25.280 numjobs=1 00:10:25.280 00:10:25.280 verify_dump=1 00:10:25.280 verify_backlog=512 00:10:25.280 verify_state_save=0 00:10:25.280 do_verify=1 00:10:25.280 verify=crc32c-intel 00:10:25.280 [job0] 00:10:25.280 filename=/dev/nvme0n1 00:10:25.280 [job1] 00:10:25.280 filename=/dev/nvme0n2 00:10:25.280 [job2] 00:10:25.280 filename=/dev/nvme0n3 00:10:25.280 [job3] 00:10:25.280 filename=/dev/nvme0n4 00:10:25.280 Could not set queue depth (nvme0n1) 00:10:25.280 Could not set queue depth (nvme0n2) 00:10:25.280 Could not set queue depth (nvme0n3) 00:10:25.280 Could not set queue depth (nvme0n4) 00:10:25.538 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.538 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.538 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.538 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.538 fio-3.35 00:10:25.538 Starting 4 threads 00:10:26.912 00:10:26.912 job0: (groupid=0, jobs=1): err= 0: pid=66198: Fri Nov 8 07:37:44 2024 00:10:26.912 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:26.912 slat (nsec): min=7389, max=31647, avg=8298.43, stdev=1459.12 00:10:26.912 clat (usec): min=108, max=549, avg=141.50, stdev=15.30 00:10:26.912 lat (usec): min=116, max=557, avg=149.80, stdev=15.53 00:10:26.912 clat percentiles (usec): 00:10:26.912 | 1.00th=[ 122], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 133], 00:10:26.912 | 30.00th=[ 135], 
40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:10:26.912 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:10:26.912 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 198], 99.95th=[ 494], 00:10:26.912 | 99.99th=[ 553] 00:10:26.912 write: IOPS=4011, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1001msec); 0 zone resets 00:10:26.912 slat (usec): min=8, max=112, avg=12.71, stdev= 3.60 00:10:26.912 clat (usec): min=68, max=3486, avg=100.80, stdev=69.27 00:10:26.912 lat (usec): min=79, max=3512, avg=113.51, stdev=70.04 00:10:26.912 clat percentiles (usec): 00:10:26.912 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:10:26.912 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 99], 00:10:26.912 | 70.00th=[ 102], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 120], 00:10:26.912 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 938], 99.95th=[ 1188], 00:10:26.912 | 99.99th=[ 3490] 00:10:26.912 bw ( KiB/s): min=16384, max=16384, per=33.79%, avg=16384.00, stdev= 0.00, samples=1 00:10:26.912 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:26.912 lat (usec) : 100=34.20%, 250=65.58%, 500=0.11%, 750=0.04%, 1000=0.04% 00:10:26.912 lat (msec) : 2=0.03%, 4=0.01% 00:10:26.912 cpu : usr=1.90%, sys=6.60%, ctx=7604, majf=0, minf=15 00:10:26.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.913 issued rwts: total=3584,4016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.913 job1: (groupid=0, jobs=1): err= 0: pid=66199: Fri Nov 8 07:37:44 2024 00:10:26.913 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:26.913 slat (nsec): min=7385, max=24606, avg=8252.06, stdev=1460.83 00:10:26.913 clat (usec): min=126, max=753, avg=253.43, stdev=33.38 00:10:26.913 lat (usec): min=134, max=772, avg=261.69, stdev=33.52 00:10:26.913 clat percentiles (usec): 00:10:26.913 | 1.00th=[ 149], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 235], 00:10:26.913 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:10:26.913 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 318], 00:10:26.913 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 412], 99.95th=[ 519], 00:10:26.913 | 99.99th=[ 750] 00:10:26.913 write: IOPS=2342, BW=9371KiB/s (9596kB/s)(9380KiB/1001msec); 0 zone resets 00:10:26.913 slat (usec): min=11, max=104, avg=13.39, stdev= 4.77 00:10:26.913 clat (usec): min=83, max=2205, avg=182.67, stdev=66.00 00:10:26.913 lat (usec): min=95, max=2231, avg=196.06, stdev=66.86 00:10:26.913 clat percentiles (usec): 00:10:26.913 | 1.00th=[ 96], 5.00th=[ 112], 10.00th=[ 123], 20.00th=[ 169], 00:10:26.913 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:10:26.913 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 215], 00:10:26.913 | 99.00th=[ 285], 99.50th=[ 379], 99.90th=[ 938], 99.95th=[ 1696], 00:10:26.913 | 99.99th=[ 2212] 00:10:26.913 bw ( KiB/s): min= 9040, max= 9040, per=18.64%, avg=9040.00, stdev= 0.00, samples=1 00:10:26.913 iops : min= 2260, max= 2260, avg=2260.00, stdev= 0.00, samples=1 00:10:26.913 lat (usec) : 100=1.16%, 250=75.44%, 500=23.22%, 750=0.07%, 1000=0.07% 00:10:26.913 lat (msec) : 2=0.02%, 4=0.02% 00:10:26.913 cpu : usr=1.00%, sys=4.10%, ctx=4394, majf=0, minf=12 00:10:26.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:26.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.913 issued rwts: total=2048,2345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.913 job2: (groupid=0, jobs=1): err= 0: pid=66200: Fri Nov 8 07:37:44 2024 00:10:26.913 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:26.913 slat (nsec): min=7744, max=75286, avg=10558.31, stdev=4251.78 00:10:26.913 clat (usec): min=154, max=458, avg=253.44, stdev=29.68 00:10:26.913 lat (usec): min=162, max=479, avg=264.00, stdev=29.26 00:10:26.913 clat percentiles (usec): 00:10:26.913 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:10:26.913 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:10:26.913 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 322], 00:10:26.913 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 420], 99.95th=[ 424], 00:10:26.913 | 99.99th=[ 457] 00:10:26.913 write: IOPS=2186, BW=8747KiB/s (8957kB/s)(8756KiB/1001msec); 0 zone resets 00:10:26.913 slat (nsec): min=11375, max=92847, avg=16808.69, stdev=6165.08 00:10:26.913 clat (usec): min=99, max=5437, avg=190.78, stdev=196.10 00:10:26.913 lat (usec): min=112, max=5450, avg=207.59, stdev=196.58 00:10:26.913 clat percentiles (usec): 00:10:26.913 | 1.00th=[ 113], 5.00th=[ 125], 10.00th=[ 157], 20.00th=[ 169], 00:10:26.913 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:10:26.913 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:10:26.913 | 99.00th=[ 255], 99.50th=[ 465], 99.90th=[ 3851], 99.95th=[ 4080], 00:10:26.913 | 99.99th=[ 5407] 00:10:26.913 bw ( KiB/s): min= 9280, max= 9280, per=19.14%, avg=9280.00, stdev= 0.00, samples=1 00:10:26.913 iops : min= 2320, max= 2320, avg=2320.00, stdev= 0.00, samples=1 00:10:26.913 lat (usec) : 100=0.05%, 250=77.60%, 500=22.14%, 750=0.07% 00:10:26.913 lat (msec) : 4=0.09%, 10=0.05% 00:10:26.913 cpu : usr=1.60%, sys=4.60%, ctx=4237, majf=0, minf=13 00:10:26.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.913 issued rwts: total=2048,2189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.913 job3: (groupid=0, jobs=1): err= 0: pid=66201: Fri Nov 8 07:37:44 2024 00:10:26.913 read: IOPS=3185, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1001msec) 00:10:26.913 slat (nsec): min=7574, max=26024, avg=9686.17, stdev=2390.15 00:10:26.913 clat (usec): min=129, max=2122, avg=159.58, stdev=39.94 00:10:26.913 lat (usec): min=137, max=2130, avg=169.27, stdev=40.33 00:10:26.913 clat percentiles (usec): 00:10:26.913 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:10:26.913 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:10:26.913 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:10:26.913 | 99.00th=[ 202], 99.50th=[ 219], 99.90th=[ 441], 99.95th=[ 668], 00:10:26.913 | 99.99th=[ 2114] 00:10:26.913 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:26.913 slat (nsec): min=8971, max=89299, avg=14777.75, stdev=5430.01 00:10:26.913 clat (usec): min=84, max=432, avg=111.52, stdev=14.16 00:10:26.913 lat (usec): min=96, max=460, avg=126.30, 
stdev=17.23 00:10:26.913 clat percentiles (usec): 00:10:26.913 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101], 00:10:26.913 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 114], 00:10:26.913 | 70.00th=[ 117], 80.00th=[ 122], 90.00th=[ 129], 95.00th=[ 135], 00:10:26.913 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 233], 99.95th=[ 245], 00:10:26.913 | 99.99th=[ 433] 00:10:26.913 bw ( KiB/s): min=13576, max=13576, per=28.00%, avg=13576.00, stdev= 0.00, samples=1 00:10:26.913 iops : min= 3394, max= 3394, avg=3394.00, stdev= 0.00, samples=1 00:10:26.913 lat (usec) : 100=9.82%, 250=90.05%, 500=0.09%, 750=0.03% 00:10:26.913 lat (msec) : 4=0.01% 00:10:26.913 cpu : usr=2.30%, sys=6.60%, ctx=6774, majf=0, minf=7 00:10:26.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.913 issued rwts: total=3189,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.913 00:10:26.913 Run status group 0 (all jobs): 00:10:26.913 READ: bw=42.4MiB/s (44.5MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=42.5MiB (44.5MB), run=1001-1001msec 00:10:26.913 WRITE: bw=47.4MiB/s (49.7MB/s), 8747KiB/s-15.7MiB/s (8957kB/s-16.4MB/s), io=47.4MiB (49.7MB), run=1001-1001msec 00:10:26.913 00:10:26.913 Disk stats (read/write): 00:10:26.913 nvme0n1: ios=3122/3329, merge=0/0, ticks=454/341, in_queue=795, util=86.57% 00:10:26.913 nvme0n2: ios=1696/2048, merge=0/0, ticks=426/378, in_queue=804, util=86.61% 00:10:26.913 nvme0n3: ios=1590/2048, merge=0/0, ticks=400/392, in_queue=792, util=88.47% 00:10:26.913 nvme0n4: ios=2649/3072, merge=0/0, ticks=427/366, in_queue=793, util=89.67% 00:10:26.913 07:37:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:26.913 [global] 00:10:26.913 thread=1 00:10:26.913 invalidate=1 00:10:26.913 rw=write 00:10:26.913 time_based=1 00:10:26.913 runtime=1 00:10:26.913 ioengine=libaio 00:10:26.913 direct=1 00:10:26.913 bs=4096 00:10:26.913 iodepth=128 00:10:26.913 norandommap=0 00:10:26.913 numjobs=1 00:10:26.913 00:10:26.913 verify_dump=1 00:10:26.913 verify_backlog=512 00:10:26.913 verify_state_save=0 00:10:26.913 do_verify=1 00:10:26.913 verify=crc32c-intel 00:10:26.913 [job0] 00:10:26.913 filename=/dev/nvme0n1 00:10:26.913 [job1] 00:10:26.913 filename=/dev/nvme0n2 00:10:26.913 [job2] 00:10:26.913 filename=/dev/nvme0n3 00:10:26.913 [job3] 00:10:26.913 filename=/dev/nvme0n4 00:10:26.913 Could not set queue depth (nvme0n1) 00:10:26.913 Could not set queue depth (nvme0n2) 00:10:26.913 Could not set queue depth (nvme0n3) 00:10:26.913 Could not set queue depth (nvme0n4) 00:10:26.913 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.913 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.913 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.913 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.913 fio-3.35 00:10:26.913 Starting 4 threads 00:10:28.370 00:10:28.370 job0: (groupid=0, jobs=1): err= 0: pid=66256: Fri Nov 8 07:37:45 2024 00:10:28.370 read: 
IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:10:28.370 slat (usec): min=5, max=4611, avg=78.32, stdev=307.79 00:10:28.370 clat (usec): min=7691, max=15709, avg=10522.01, stdev=861.92 00:10:28.370 lat (usec): min=7726, max=15726, avg=10600.33, stdev=896.71 00:10:28.370 clat percentiles (usec): 00:10:28.370 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:10:28.370 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:10:28.370 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11600], 95.00th=[12125], 00:10:28.370 | 99.00th=[13566], 99.50th=[14091], 99.90th=[15139], 99.95th=[15664], 00:10:28.370 | 99.99th=[15664] 00:10:28.370 write: IOPS=6256, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1003msec); 0 zone resets 00:10:28.370 slat (usec): min=11, max=2810, avg=75.45, stdev=339.21 00:10:28.370 clat (usec): min=387, max=13374, avg=9926.07, stdev=973.53 00:10:28.370 lat (usec): min=3014, max=13394, avg=10001.52, stdev=1020.39 00:10:28.370 clat percentiles (usec): 00:10:28.370 | 1.00th=[ 6980], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:10:28.370 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:10:28.370 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[11600], 00:10:28.370 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13173], 99.95th=[13304], 00:10:28.370 | 99.99th=[13435] 00:10:28.370 bw ( KiB/s): min=24576, max=24672, per=34.66%, avg=24624.00, stdev=67.88, samples=2 00:10:28.370 iops : min= 6144, max= 6168, avg=6156.00, stdev=16.97, samples=2 00:10:28.370 lat (usec) : 500=0.01% 00:10:28.370 lat (msec) : 4=0.34%, 10=37.14%, 20=62.52% 00:10:28.370 cpu : usr=5.29%, sys=15.17%, ctx=493, majf=0, minf=5 00:10:28.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:28.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.370 issued rwts: total=6144,6275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.370 job1: (groupid=0, jobs=1): err= 0: pid=66257: Fri Nov 8 07:37:45 2024 00:10:28.370 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:28.370 slat (usec): min=8, max=10387, avg=143.22, stdev=674.46 00:10:28.370 clat (usec): min=9342, max=37648, avg=17091.70, stdev=4978.28 00:10:28.370 lat (usec): min=9359, max=37695, avg=17234.92, stdev=5035.24 00:10:28.370 clat percentiles (usec): 00:10:28.370 | 1.00th=[10683], 5.00th=[13173], 10.00th=[13960], 20.00th=[14353], 00:10:28.370 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15270], 60.00th=[15664], 00:10:28.370 | 70.00th=[15926], 80.00th=[19792], 90.00th=[22938], 95.00th=[29754], 00:10:28.370 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[37487], 00:10:28.370 | 99.99th=[37487] 00:10:28.370 write: IOPS=3156, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1003msec); 0 zone resets 00:10:28.370 slat (usec): min=11, max=8050, avg=166.72, stdev=657.94 00:10:28.370 clat (usec): min=2521, max=64366, avg=23505.50, stdev=13282.07 00:10:28.370 lat (usec): min=2549, max=64393, avg=23672.22, stdev=13375.78 00:10:28.370 clat percentiles (usec): 00:10:28.370 | 1.00th=[ 3589], 5.00th=[11076], 10.00th=[11731], 20.00th=[12387], 00:10:28.370 | 30.00th=[13698], 40.00th=[17171], 50.00th=[18482], 60.00th=[22152], 00:10:28.370 | 70.00th=[25297], 80.00th=[37487], 90.00th=[42730], 95.00th=[49546], 00:10:28.370 | 99.00th=[62653], 99.50th=[63177], 99.90th=[64226], 99.95th=[64226], 
00:10:28.370 | 99.99th=[64226] 00:10:28.370 bw ( KiB/s): min=10376, max=14228, per=17.31%, avg=12302.00, stdev=2723.78, samples=2 00:10:28.370 iops : min= 2594, max= 3557, avg=3075.50, stdev=680.94, samples=2 00:10:28.370 lat (msec) : 4=0.64%, 10=1.65%, 20=63.95%, 50=31.39%, 100=2.37% 00:10:28.370 cpu : usr=3.19%, sys=10.78%, ctx=330, majf=0, minf=10 00:10:28.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:28.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.370 issued rwts: total=3072,3166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.370 job2: (groupid=0, jobs=1): err= 0: pid=66258: Fri Nov 8 07:37:45 2024 00:10:28.370 read: IOPS=5137, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1003msec) 00:10:28.370 slat (usec): min=9, max=2876, avg=89.75, stdev=412.59 00:10:28.370 clat (usec): min=612, max=13105, avg=11960.26, stdev=919.15 00:10:28.370 lat (usec): min=2870, max=14022, avg=12050.02, stdev=823.79 00:10:28.370 clat percentiles (usec): 00:10:28.370 | 1.00th=[ 9372], 5.00th=[11338], 10.00th=[11469], 20.00th=[11731], 00:10:28.370 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:10:28.370 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12649], 95.00th=[12780], 00:10:28.370 | 99.00th=[13042], 99.50th=[13042], 99.90th=[13042], 99.95th=[13042], 00:10:28.370 | 99.99th=[13042] 00:10:28.370 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:28.370 slat (usec): min=8, max=4690, avg=87.77, stdev=372.38 00:10:28.370 clat (usec): min=5311, max=15025, avg=11545.60, stdev=877.79 00:10:28.370 lat (usec): min=5328, max=15047, avg=11633.38, stdev=802.64 00:10:28.370 clat percentiles (usec): 00:10:28.371 | 1.00th=[ 8717], 5.00th=[10683], 10.00th=[10945], 20.00th=[11207], 00:10:28.371 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11469], 60.00th=[11731], 00:10:28.371 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:10:28.371 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15008], 99.95th=[15008], 00:10:28.371 | 99.99th=[15008] 00:10:28.371 bw ( KiB/s): min=20569, max=23768, per=31.20%, avg=22168.50, stdev=2262.03, samples=2 00:10:28.371 iops : min= 5142, max= 5942, avg=5542.00, stdev=565.69, samples=2 00:10:28.371 lat (usec) : 750=0.01% 00:10:28.371 lat (msec) : 4=0.29%, 10=3.10%, 20=96.61% 00:10:28.371 cpu : usr=4.39%, sys=15.07%, ctx=396, majf=0, minf=9 00:10:28.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:28.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.371 issued rwts: total=5153,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.371 job3: (groupid=0, jobs=1): err= 0: pid=66259: Fri Nov 8 07:37:45 2024 00:10:28.371 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:28.371 slat (usec): min=8, max=6638, avg=144.13, stdev=598.88 00:10:28.371 clat (usec): min=12338, max=46337, avg=19729.12, stdev=6212.70 00:10:28.371 lat (usec): min=12361, max=47082, avg=19873.25, stdev=6262.56 00:10:28.371 clat percentiles (usec): 00:10:28.371 | 1.00th=[13304], 5.00th=[15139], 10.00th=[15401], 20.00th=[15664], 00:10:28.371 | 30.00th=[15795], 40.00th=[16057], 50.00th=[17695], 60.00th=[19006], 
00:10:28.371 | 70.00th=[20841], 80.00th=[22676], 90.00th=[25560], 95.00th=[35914], 00:10:28.371 | 99.00th=[43779], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:10:28.371 | 99.99th=[46400] 00:10:28.371 write: IOPS=2764, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1005msec); 0 zone resets 00:10:28.371 slat (usec): min=11, max=12177, avg=217.78, stdev=839.29 00:10:28.371 clat (usec): min=2356, max=68962, avg=27161.04, stdev=12469.20 00:10:28.371 lat (usec): min=6071, max=68999, avg=27378.82, stdev=12547.66 00:10:28.371 clat percentiles (usec): 00:10:28.371 | 1.00th=[12125], 5.00th=[13042], 10.00th=[13173], 20.00th=[16581], 00:10:28.371 | 30.00th=[21103], 40.00th=[22152], 50.00th=[23200], 60.00th=[27395], 00:10:28.371 | 70.00th=[31327], 80.00th=[36963], 90.00th=[41681], 95.00th=[51643], 00:10:28.371 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:10:28.371 | 99.99th=[68682] 00:10:28.371 bw ( KiB/s): min= 9915, max=11304, per=14.93%, avg=10609.50, stdev=982.17, samples=2 00:10:28.371 iops : min= 2478, max= 2826, avg=2652.00, stdev=246.07, samples=2 00:10:28.371 lat (msec) : 4=0.02%, 10=0.15%, 20=45.37%, 50=51.67%, 100=2.79% 00:10:28.371 cpu : usr=2.89%, sys=9.56%, ctx=359, majf=0, minf=15 00:10:28.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:28.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.371 issued rwts: total=2560,2778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.371 00:10:28.371 Run status group 0 (all jobs): 00:10:28.371 READ: bw=65.8MiB/s (69.0MB/s), 9.95MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=66.1MiB (69.3MB), run=1003-1005msec 00:10:28.371 WRITE: bw=69.4MiB/s (72.8MB/s), 10.8MiB/s-24.4MiB/s (11.3MB/s-25.6MB/s), io=69.7MiB (73.1MB), run=1003-1005msec 00:10:28.371 00:10:28.371 Disk stats (read/write): 00:10:28.371 nvme0n1: ios=5170/5630, merge=0/0, ticks=17106/15403, in_queue=32509, util=88.16% 00:10:28.371 nvme0n2: ios=2609/2855, merge=0/0, ticks=21690/29288, in_queue=50978, util=88.89% 00:10:28.371 nvme0n3: ios=4636/4672, merge=0/0, ticks=12290/11533, in_queue=23823, util=89.92% 00:10:28.371 nvme0n4: ios=2048/2511, merge=0/0, ticks=12752/21816, in_queue=34568, util=89.66% 00:10:28.371 07:37:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:28.371 [global] 00:10:28.371 thread=1 00:10:28.371 invalidate=1 00:10:28.371 rw=randwrite 00:10:28.371 time_based=1 00:10:28.371 runtime=1 00:10:28.371 ioengine=libaio 00:10:28.371 direct=1 00:10:28.371 bs=4096 00:10:28.371 iodepth=128 00:10:28.371 norandommap=0 00:10:28.371 numjobs=1 00:10:28.371 00:10:28.371 verify_dump=1 00:10:28.371 verify_backlog=512 00:10:28.371 verify_state_save=0 00:10:28.371 do_verify=1 00:10:28.371 verify=crc32c-intel 00:10:28.371 [job0] 00:10:28.371 filename=/dev/nvme0n1 00:10:28.371 [job1] 00:10:28.371 filename=/dev/nvme0n2 00:10:28.371 [job2] 00:10:28.371 filename=/dev/nvme0n3 00:10:28.371 [job3] 00:10:28.371 filename=/dev/nvme0n4 00:10:28.371 Could not set queue depth (nvme0n1) 00:10:28.371 Could not set queue depth (nvme0n2) 00:10:28.371 Could not set queue depth (nvme0n3) 00:10:28.371 Could not set queue depth (nvme0n4) 00:10:28.371 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:10:28.371 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.371 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.371 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.371 fio-3.35 00:10:28.371 Starting 4 threads 00:10:29.748 00:10:29.748 job0: (groupid=0, jobs=1): err= 0: pid=66318: Fri Nov 8 07:37:47 2024 00:10:29.748 read: IOPS=5215, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1006msec) 00:10:29.748 slat (usec): min=8, max=6131, avg=86.89, stdev=541.51 00:10:29.748 clat (usec): min=972, max=19862, avg=12166.03, stdev=1450.81 00:10:29.748 lat (usec): min=6949, max=23656, avg=12252.92, stdev=1476.54 00:10:29.748 clat percentiles (usec): 00:10:29.748 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11600], 00:10:29.748 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12387], 00:10:29.748 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13435], 00:10:29.748 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19792], 99.95th=[19792], 00:10:29.748 | 99.99th=[19792] 00:10:29.748 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:29.748 slat (usec): min=11, max=7393, avg=89.29, stdev=521.81 00:10:29.748 clat (usec): min=5713, max=15439, avg=11291.58, stdev=1132.00 00:10:29.748 lat (usec): min=7597, max=15539, avg=11380.87, stdev=1032.31 00:10:29.748 clat percentiles (usec): 00:10:29.748 | 1.00th=[ 7570], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10421], 00:10:29.748 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:10:29.748 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12387], 95.00th=[13042], 00:10:29.748 | 99.00th=[13960], 99.50th=[14484], 99.90th=[15401], 99.95th=[15401], 00:10:29.748 | 99.99th=[15401] 00:10:29.748 bw ( KiB/s): min=22520, max=22528, per=27.48%, avg=22524.00, stdev= 5.66, samples=2 00:10:29.748 iops : min= 5630, max= 5632, avg=5631.00, stdev= 1.41, samples=2 00:10:29.748 lat (usec) : 1000=0.01% 00:10:29.748 lat (msec) : 10=8.09%, 20=91.90% 00:10:29.748 cpu : usr=3.38%, sys=15.82%, ctx=222, majf=0, minf=3 00:10:29.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:29.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.748 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.748 job1: (groupid=0, jobs=1): err= 0: pid=66319: Fri Nov 8 07:37:47 2024 00:10:29.748 read: IOPS=5167, BW=20.2MiB/s (21.2MB/s)(20.2MiB/1003msec) 00:10:29.748 slat (usec): min=6, max=6066, avg=87.02, stdev=537.87 00:10:29.748 clat (usec): min=1267, max=20387, avg=12272.35, stdev=1503.64 00:10:29.748 lat (usec): min=5895, max=23787, avg=12359.37, stdev=1521.35 00:10:29.748 clat percentiles (usec): 00:10:29.748 | 1.00th=[ 6849], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[11863], 00:10:29.748 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:10:29.748 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:10:29.748 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20317], 99.95th=[20317], 00:10:29.748 | 99.99th=[20317] 00:10:29.748 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:29.748 slat (usec): min=11, max=7636, avg=89.73, 
stdev=517.02 00:10:29.748 clat (usec): min=5746, max=16332, avg=11276.98, stdev=1258.94 00:10:29.748 lat (usec): min=5771, max=16520, avg=11366.72, stdev=1170.32 00:10:29.748 clat percentiles (usec): 00:10:29.748 | 1.00th=[ 6390], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10552], 00:10:29.748 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:10:29.748 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[12780], 00:10:29.748 | 99.00th=[14877], 99.50th=[15139], 99.90th=[16319], 99.95th=[16319], 00:10:29.748 | 99.99th=[16319] 00:10:29.748 bw ( KiB/s): min=22056, max=22525, per=27.20%, avg=22290.50, stdev=331.63, samples=2 00:10:29.748 iops : min= 5514, max= 5631, avg=5572.50, stdev=82.73, samples=2 00:10:29.748 lat (msec) : 2=0.01%, 10=6.32%, 20=93.60%, 50=0.07% 00:10:29.748 cpu : usr=4.39%, sys=15.17%, ctx=229, majf=0, minf=8 00:10:29.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:29.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.748 issued rwts: total=5183,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.748 job2: (groupid=0, jobs=1): err= 0: pid=66324: Fri Nov 8 07:37:47 2024 00:10:29.748 read: IOPS=4533, BW=17.7MiB/s (18.6MB/s)(17.7MiB/1002msec) 00:10:29.748 slat (usec): min=8, max=7351, avg=104.35, stdev=645.62 00:10:29.748 clat (usec): min=318, max=23405, avg=14412.69, stdev=2170.10 00:10:29.748 lat (usec): min=3536, max=28266, avg=14517.05, stdev=2173.79 00:10:29.748 clat percentiles (usec): 00:10:29.748 | 1.00th=[ 7308], 5.00th=[10421], 10.00th=[13173], 20.00th=[13698], 00:10:29.748 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:10:29.748 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15664], 95.00th=[16450], 00:10:29.748 | 99.00th=[22676], 99.50th=[23200], 99.90th=[23462], 99.95th=[23462], 00:10:29.748 | 99.99th=[23462] 00:10:29.748 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:29.748 slat (usec): min=4, max=11578, avg=105.68, stdev=639.48 00:10:29.748 clat (usec): min=6984, max=20929, avg=13375.01, stdev=1435.49 00:10:29.748 lat (usec): min=9252, max=20979, avg=13480.69, stdev=1322.43 00:10:29.748 clat percentiles (usec): 00:10:29.748 | 1.00th=[ 8848], 5.00th=[11469], 10.00th=[11994], 20.00th=[12518], 00:10:29.749 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:10:29.749 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14353], 95.00th=[14746], 00:10:29.749 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841], 00:10:29.749 | 99.99th=[20841] 00:10:29.749 bw ( KiB/s): min=17416, max=19486, per=22.51%, avg=18451.00, stdev=1463.71, samples=2 00:10:29.749 iops : min= 4354, max= 4871, avg=4612.50, stdev=365.57, samples=2 00:10:29.749 lat (usec) : 500=0.01% 00:10:29.749 lat (msec) : 4=0.26%, 10=3.26%, 20=94.90%, 50=1.57% 00:10:29.749 cpu : usr=4.70%, sys=12.59%, ctx=198, majf=0, minf=9 00:10:29.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:29.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.749 issued rwts: total=4543,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.749 job3: (groupid=0, jobs=1): err= 0: 
pid=66325: Fri Nov 8 07:37:47 2024 00:10:29.749 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:10:29.749 slat (usec): min=5, max=9950, avg=103.51, stdev=668.61 00:10:29.749 clat (usec): min=3507, max=24866, avg=14278.73, stdev=1669.90 00:10:29.749 lat (usec): min=3517, max=27399, avg=14382.24, stdev=1695.94 00:10:29.749 clat percentiles (usec): 00:10:29.749 | 1.00th=[ 9110], 5.00th=[12256], 10.00th=[12780], 20.00th=[13698], 00:10:29.749 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:10:29.749 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15533], 95.00th=[15926], 00:10:29.749 | 99.00th=[20317], 99.50th=[21890], 99.90th=[23200], 99.95th=[23200], 00:10:29.749 | 99.99th=[24773] 00:10:29.749 write: IOPS=4774, BW=18.6MiB/s (19.6MB/s)(18.8MiB/1010msec); 0 zone resets 00:10:29.749 slat (usec): min=6, max=8557, avg=100.21, stdev=602.49 00:10:29.749 clat (usec): min=3255, max=18926, avg=12876.06, stdev=1698.74 00:10:29.749 lat (usec): min=3272, max=18934, avg=12976.28, stdev=1614.98 00:10:29.749 clat percentiles (usec): 00:10:29.749 | 1.00th=[ 7111], 5.00th=[10421], 10.00th=[11076], 20.00th=[11994], 00:10:29.749 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:10:29.749 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14222], 95.00th=[14877], 00:10:29.749 | 99.00th=[17695], 99.50th=[18744], 99.90th=[18744], 99.95th=[19006], 00:10:29.749 | 99.99th=[19006] 00:10:29.749 bw ( KiB/s): min=17592, max=20000, per=22.93%, avg=18796.00, stdev=1702.71, samples=2 00:10:29.749 iops : min= 4398, max= 5000, avg=4699.00, stdev=425.68, samples=2 00:10:29.749 lat (msec) : 4=0.18%, 10=3.51%, 20=95.58%, 50=0.73% 00:10:29.749 cpu : usr=4.16%, sys=13.18%, ctx=253, majf=0, minf=5 00:10:29.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:29.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.749 issued rwts: total=4608,4822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.749 00:10:29.749 Run status group 0 (all jobs): 00:10:29.749 READ: bw=75.7MiB/s (79.4MB/s), 17.7MiB/s-20.4MiB/s (18.6MB/s-21.4MB/s), io=76.5MiB (80.2MB), run=1002-1010msec 00:10:29.749 WRITE: bw=80.0MiB/s (83.9MB/s), 18.0MiB/s-21.9MiB/s (18.8MB/s-23.0MB/s), io=80.8MiB (84.8MB), run=1002-1010msec 00:10:29.749 00:10:29.749 Disk stats (read/write): 00:10:29.749 nvme0n1: ios=4650/4616, merge=0/0, ticks=52693/47174, in_queue=99867, util=86.67% 00:10:29.749 nvme0n2: ios=4609/4608, merge=0/0, ticks=52366/46866, in_queue=99232, util=87.54% 00:10:29.749 nvme0n3: ios=3652/4096, merge=0/0, ticks=50051/50625, in_queue=100676, util=88.75% 00:10:29.749 nvme0n4: ios=3720/4096, merge=0/0, ticks=50038/49446, in_queue=99484, util=89.50% 00:10:29.749 07:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:29.749 07:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66339 00:10:29.749 07:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:29.749 07:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:29.749 [global] 00:10:29.749 thread=1 00:10:29.749 invalidate=1 00:10:29.749 rw=read 00:10:29.749 time_based=1 00:10:29.749 runtime=10 00:10:29.749 ioengine=libaio 00:10:29.749 direct=1 00:10:29.749 
bs=4096 00:10:29.749 iodepth=1 00:10:29.749 norandommap=1 00:10:29.749 numjobs=1 00:10:29.749 00:10:29.749 [job0] 00:10:29.749 filename=/dev/nvme0n1 00:10:29.749 [job1] 00:10:29.749 filename=/dev/nvme0n2 00:10:29.749 [job2] 00:10:29.749 filename=/dev/nvme0n3 00:10:29.749 [job3] 00:10:29.749 filename=/dev/nvme0n4 00:10:29.749 Could not set queue depth (nvme0n1) 00:10:29.749 Could not set queue depth (nvme0n2) 00:10:29.749 Could not set queue depth (nvme0n3) 00:10:29.749 Could not set queue depth (nvme0n4) 00:10:29.749 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.749 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.749 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.749 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.749 fio-3.35 00:10:29.749 Starting 4 threads 00:10:33.037 07:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:33.037 fio: pid=66382, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.037 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=74452992, buflen=4096 00:10:33.037 07:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:33.037 fio: pid=66381, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.037 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=79388672, buflen=4096 00:10:33.037 07:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.037 07:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:33.296 fio: pid=66379, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.296 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=19386368, buflen=4096 00:10:33.296 07:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.296 07:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:33.555 fio: pid=66380, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.555 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=28938240, buflen=4096 00:10:33.555 00:10:33.555 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66379: Fri Nov 8 07:37:51 2024 00:10:33.555 read: IOPS=6200, BW=24.2MiB/s (25.4MB/s)(82.5MiB/3406msec) 00:10:33.555 slat (usec): min=7, max=15546, avg=11.49, stdev=158.13 00:10:33.555 clat (usec): min=51, max=2158, avg=149.03, stdev=26.25 00:10:33.555 lat (usec): min=119, max=15733, avg=160.53, stdev=160.79 00:10:33.555 clat percentiles (usec): 00:10:33.555 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:10:33.555 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:10:33.555 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 
00:10:33.555 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 251], 99.95th=[ 482], 00:10:33.555 | 99.99th=[ 1516] 00:10:33.555 bw ( KiB/s): min=23376, max=26024, per=27.45%, avg=24902.67, stdev=1168.79, samples=6 00:10:33.555 iops : min= 5844, max= 6506, avg=6225.67, stdev=292.20, samples=6 00:10:33.555 lat (usec) : 100=0.01%, 250=99.89%, 500=0.06%, 750=0.02%, 1000=0.01% 00:10:33.555 lat (msec) : 2=0.01%, 4=0.01% 00:10:33.555 cpu : usr=1.12%, sys=5.49%, ctx=21129, majf=0, minf=1 00:10:33.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.555 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.555 issued rwts: total=21118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.555 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66380: Fri Nov 8 07:37:51 2024 00:10:33.555 read: IOPS=6476, BW=25.3MiB/s (26.5MB/s)(91.6MiB/3621msec) 00:10:33.555 slat (usec): min=6, max=16497, avg=11.69, stdev=195.75 00:10:33.555 clat (usec): min=68, max=23211, avg=141.96, stdev=164.23 00:10:33.555 lat (usec): min=97, max=23219, avg=153.65, stdev=256.07 00:10:33.555 clat percentiles (usec): 00:10:33.555 | 1.00th=[ 99], 5.00th=[ 115], 10.00th=[ 124], 20.00th=[ 130], 00:10:33.555 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:10:33.555 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:10:33.555 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 717], 99.95th=[ 1680], 00:10:33.555 | 99.99th=[ 4080] 00:10:33.555 bw ( KiB/s): min=23568, max=27056, per=28.60%, avg=25943.00, stdev=1315.72, samples=7 00:10:33.555 iops : min= 5892, max= 6764, avg=6485.57, stdev=328.99, samples=7 00:10:33.555 lat (usec) : 100=1.25%, 250=98.54%, 500=0.08%, 750=0.04%, 1000=0.02% 00:10:33.555 lat (msec) : 2=0.04%, 4=0.02%, 10=0.01%, 50=0.01% 00:10:33.555 cpu : usr=1.27%, sys=5.30%, ctx=23458, majf=0, minf=1 00:10:33.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.555 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.555 issued rwts: total=23450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.555 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66381: Fri Nov 8 07:37:51 2024 00:10:33.555 read: IOPS=6057, BW=23.7MiB/s (24.8MB/s)(75.7MiB/3200msec) 00:10:33.555 slat (usec): min=7, max=11787, avg= 9.42, stdev=104.80 00:10:33.555 clat (usec): min=119, max=2304, avg=154.97, stdev=38.24 00:10:33.555 lat (usec): min=127, max=11954, avg=164.40, stdev=111.82 00:10:33.555 clat percentiles (usec): 00:10:33.555 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 143], 00:10:33.555 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:10:33.555 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 178], 00:10:33.555 | 99.00th=[ 198], 99.50th=[ 227], 99.90th=[ 490], 99.95th=[ 881], 00:10:33.555 | 99.99th=[ 2212] 00:10:33.555 bw ( KiB/s): min=23080, max=25072, per=26.78%, avg=24294.67, stdev=729.54, samples=6 00:10:33.555 iops : min= 5770, max= 6268, avg=6073.67, stdev=182.39, samples=6 00:10:33.555 lat (usec) : 250=99.58%, 500=0.33%, 750=0.03%, 1000=0.03% 
00:10:33.555 lat (msec) : 2=0.03%, 4=0.01% 00:10:33.555 cpu : usr=1.53%, sys=4.47%, ctx=19386, majf=0, minf=2 00:10:33.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.555 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.555 issued rwts: total=19383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.555 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66382: Fri Nov 8 07:37:51 2024 00:10:33.555 read: IOPS=6210, BW=24.3MiB/s (25.4MB/s)(71.0MiB/2927msec) 00:10:33.555 slat (nsec): min=7272, max=75079, avg=8483.04, stdev=2964.75 00:10:33.555 clat (usec): min=122, max=640, avg=151.74, stdev=15.18 00:10:33.555 lat (usec): min=130, max=650, avg=160.22, stdev=15.83 00:10:33.555 clat percentiles (usec): 00:10:33.555 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 141], 00:10:33.555 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:10:33.555 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 176], 00:10:33.555 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 265], 99.95th=[ 338], 00:10:33.555 | 99.99th=[ 545] 00:10:33.555 bw ( KiB/s): min=24504, max=25424, per=27.53%, avg=24976.00, stdev=404.06, samples=5 00:10:33.555 iops : min= 6126, max= 6356, avg=6244.00, stdev=101.01, samples=5 00:10:33.555 lat (usec) : 250=99.88%, 500=0.09%, 750=0.03% 00:10:33.555 cpu : usr=1.20%, sys=5.02%, ctx=18178, majf=0, minf=2 00:10:33.555 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.555 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.555 issued rwts: total=18178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.555 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.555 00:10:33.555 Run status group 0 (all jobs): 00:10:33.555 READ: bw=88.6MiB/s (92.9MB/s), 23.7MiB/s-25.3MiB/s (24.8MB/s-26.5MB/s), io=321MiB (336MB), run=2927-3621msec 00:10:33.555 00:10:33.555 Disk stats (read/write): 00:10:33.555 nvme0n1: ios=20820/0, merge=0/0, ticks=3140/0, in_queue=3140, util=94.94% 00:10:33.555 nvme0n2: ios=23385/0, merge=0/0, ticks=3349/0, in_queue=3349, util=94.86% 00:10:33.555 nvme0n3: ios=18814/0, merge=0/0, ticks=2918/0, in_queue=2918, util=96.23% 00:10:33.555 nvme0n4: ios=17797/0, merge=0/0, ticks=2715/0, in_queue=2715, util=96.79% 00:10:33.555 07:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.555 07:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:33.814 07:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.814 07:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:34.074 07:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.074 07:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:10:34.074 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.074 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:34.333 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.333 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:34.593 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:34.593 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66339 00:10:34.593 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:34.593 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.852 nvmf hotplug test: fio failed as expected 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:34.852 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:35.111 
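The stretch of trace above is the hotplug half of nvmf_fio_target: a 10-second fio read job is started against the four exported namespaces, the raid/concat and malloc bdevs behind them are deleted mid-run, and the job is expected to die with "Operation not supported" errors. A minimal bash sketch of that sequence, reconstructed only from the commands visible in this log (the exact loop structure in fio.sh may differ):

  # Condensed sketch of the hotplug check traced above; paths, bdev names and
  # fio-wrapper flags are taken from this log, the loop body is illustrative.
  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  # Pull the backing bdevs out from under the running read workload.
  "$spdk/scripts/rpc.py" bdev_raid_delete concat0
  "$spdk/scripts/rpc.py" bdev_raid_delete raid0
  for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      "$spdk/scripts/rpc.py" bdev_malloc_delete "$malloc"
  done
  # fio is expected to exit non-zero once its namespaces disappear.
  if wait "$fio_pid"; then
      echo "unexpected: fio survived bdev removal"
  else
      echo "nvmf hotplug test: fio failed as expected"
  fi
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  "$spdk/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1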
07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.111 rmmod nvme_tcp 00:10:35.111 rmmod nvme_fabrics 00:10:35.111 rmmod nvme_keyring 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65951 ']' 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65951 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 65951 ']' 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 65951 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65951 00:10:35.111 killing process with pid 65951 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65951' 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 65951 00:10:35.111 07:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 65951 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.370 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:35.629 00:10:35.629 real 0m19.625s 00:10:35.629 user 1m11.833s 00:10:35.629 sys 0m11.072s 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.629 ************************************ 00:10:35.629 END TEST nvmf_fio_target 00:10:35.629 ************************************ 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.629 ************************************ 00:10:35.629 START TEST nvmf_bdevio 00:10:35.629 ************************************ 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:35.629 * Looking for test storage... 
00:10:35.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.629 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:35.888 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.889 --rc genhtml_branch_coverage=1 00:10:35.889 --rc genhtml_function_coverage=1 00:10:35.889 --rc genhtml_legend=1 00:10:35.889 --rc geninfo_all_blocks=1 00:10:35.889 --rc geninfo_unexecuted_blocks=1 00:10:35.889 00:10:35.889 ' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.889 --rc genhtml_branch_coverage=1 00:10:35.889 --rc genhtml_function_coverage=1 00:10:35.889 --rc genhtml_legend=1 00:10:35.889 --rc geninfo_all_blocks=1 00:10:35.889 --rc geninfo_unexecuted_blocks=1 00:10:35.889 00:10:35.889 ' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.889 --rc genhtml_branch_coverage=1 00:10:35.889 --rc genhtml_function_coverage=1 00:10:35.889 --rc genhtml_legend=1 00:10:35.889 --rc geninfo_all_blocks=1 00:10:35.889 --rc geninfo_unexecuted_blocks=1 00:10:35.889 00:10:35.889 ' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.889 --rc genhtml_branch_coverage=1 00:10:35.889 --rc genhtml_function_coverage=1 00:10:35.889 --rc genhtml_legend=1 00:10:35.889 --rc geninfo_all_blocks=1 00:10:35.889 --rc geninfo_unexecuted_blocks=1 00:10:35.889 00:10:35.889 ' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.889 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
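nvmftestinit, traced next, wires up bridged veth links between the host-side initiator interfaces and a target network namespace so the TCP transport has something to listen on. Condensed from the commands that follow; interface names and the 10.0.0.x addresses are exactly as they appear in the trace, and the second nvmf_init_if2/nvmf_tgt_if2 pair (10.0.0.2/10.0.0.4) is configured the same way and omitted here:

  # Rough equivalent of the topology nvmftestinit builds below: a bridged veth
  # pair between the host-side initiator (10.0.0.1) and a target namespace
  # holding the nvmf_tgt listener address (10.0.0.3).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # host -> target namespace, as verified at the end of the trace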
00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.889 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:35.890 Cannot find device "nvmf_init_br" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:35.890 Cannot find device "nvmf_init_br2" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:35.890 Cannot find device "nvmf_tgt_br" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.890 Cannot find device "nvmf_tgt_br2" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:35.890 Cannot find device "nvmf_init_br" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:35.890 Cannot find device "nvmf_init_br2" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:35.890 Cannot find device "nvmf_tgt_br" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:35.890 Cannot find device "nvmf_tgt_br2" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:35.890 Cannot find device "nvmf_br" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:35.890 Cannot find device "nvmf_init_if" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:35.890 Cannot find device "nvmf_init_if2" 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.890 
07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.890 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:36.149 07:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:36.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:36.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:10:36.149 00:10:36.149 --- 10.0.0.3 ping statistics --- 00:10:36.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.149 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:36.149 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:36.149 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:36.149 00:10:36.149 --- 10.0.0.4 ping statistics --- 00:10:36.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.149 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:36.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:36.149 00:10:36.149 --- 10.0.0.1 ping statistics --- 00:10:36.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.149 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:36.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:36.149 00:10:36.149 --- 10.0.0.2 ping statistics --- 00:10:36.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.149 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66705 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66705 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 66705 ']' 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:36.149 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.408 [2024-11-08 07:37:54.133995] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:10:36.408 [2024-11-08 07:37:54.134104] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.408 [2024-11-08 07:37:54.293276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.408 [2024-11-08 07:37:54.348209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.408 [2024-11-08 07:37:54.348275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.408 [2024-11-08 07:37:54.348291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.408 [2024-11-08 07:37:54.348304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.408 [2024-11-08 07:37:54.348315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.408 [2024-11-08 07:37:54.349709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:36.408 [2024-11-08 07:37:54.349804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:36.408 [2024-11-08 07:37:54.350012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:36.408 [2024-11-08 07:37:54.350353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.667 [2024-11-08 07:37:54.398234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.233 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:37.233 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:37.234 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.234 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:37.234 07:37:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.234 [2024-11-08 07:37:55.026004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.234 Malloc0 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.234 [2024-11-08 07:37:55.090046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:37.234 { 00:10:37.234 "params": { 00:10:37.234 "name": "Nvme$subsystem", 00:10:37.234 "trtype": "$TEST_TRANSPORT", 00:10:37.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:37.234 "adrfam": "ipv4", 00:10:37.234 "trsvcid": "$NVMF_PORT", 00:10:37.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:37.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:37.234 "hdgst": ${hdgst:-false}, 00:10:37.234 "ddgst": ${ddgst:-false} 00:10:37.234 }, 00:10:37.234 "method": "bdev_nvme_attach_controller" 00:10:37.234 } 00:10:37.234 EOF 00:10:37.234 )") 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
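At this point the target side is fully assembled: a 64 MiB malloc bdev is exposed as a namespace of nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.3:4420, and the bdevio app is then pointed at it through a JSON config generated on the fly (printed in full just below). Collapsed into plain rpc.py invocations, the rpc_cmd traces above amount to roughly the following; the socket path handling inside rpc_cmd is omitted:

  # Same sequence as the rpc_cmd traces above, written out directly; rpc.py
  # talks to the nvmf_tgt started earlier inside nvmf_tgt_ns_spdk.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420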
00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:37.234 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:37.234 "params": { 00:10:37.234 "name": "Nvme1", 00:10:37.234 "trtype": "tcp", 00:10:37.234 "traddr": "10.0.0.3", 00:10:37.234 "adrfam": "ipv4", 00:10:37.234 "trsvcid": "4420", 00:10:37.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:37.234 "hdgst": false, 00:10:37.234 "ddgst": false 00:10:37.234 }, 00:10:37.234 "method": "bdev_nvme_attach_controller" 00:10:37.234 }' 00:10:37.234 [2024-11-08 07:37:55.152232] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:10:37.234 [2024-11-08 07:37:55.152332] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66741 ] 00:10:37.493 [2024-11-08 07:37:55.303914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.493 [2024-11-08 07:37:55.358044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.493 [2024-11-08 07:37:55.358194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.493 [2024-11-08 07:37:55.358195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.493 [2024-11-08 07:37:55.407674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.752 I/O targets: 00:10:37.752 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:37.752 00:10:37.752 00:10:37.752 CUnit - A unit testing framework for C - Version 2.1-3 00:10:37.752 http://cunit.sourceforge.net/ 00:10:37.752 00:10:37.752 00:10:37.752 Suite: bdevio tests on: Nvme1n1 00:10:37.752 Test: blockdev write read block ...passed 00:10:37.752 Test: blockdev write zeroes read block ...passed 00:10:37.752 Test: blockdev write zeroes read no split ...passed 00:10:37.752 Test: blockdev write zeroes read split ...passed 00:10:37.752 Test: blockdev write zeroes read split partial ...passed 00:10:37.752 Test: blockdev reset ...[2024-11-08 07:37:55.543512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:37.752 [2024-11-08 07:37:55.543597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1081180 (9): Bad file descriptor 00:10:37.752 [2024-11-08 07:37:55.562360] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:37.752 passed 00:10:37.752 Test: blockdev write read 8 blocks ...passed 00:10:37.752 Test: blockdev write read size > 128k ...passed 00:10:37.752 Test: blockdev write read invalid size ...passed 00:10:37.752 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:37.752 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:37.752 Test: blockdev write read max offset ...passed 00:10:37.752 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:37.752 Test: blockdev writev readv 8 blocks ...passed 00:10:37.752 Test: blockdev writev readv 30 x 1block ...passed 00:10:37.752 Test: blockdev writev readv block ...passed 00:10:37.752 Test: blockdev writev readv size > 128k ...passed 00:10:37.752 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:37.752 Test: blockdev comparev and writev ...[2024-11-08 07:37:55.569121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.752 [2024-11-08 07:37:55.569183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.569212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.752 [2024-11-08 07:37:55.569230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.569596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.752 [2024-11-08 07:37:55.569627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.569652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.752 [2024-11-08 07:37:55.569669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.570129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.752 [2024-11-08 07:37:55.570159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.570183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.752 [2024-11-08 07:37:55.570200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.570578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.752 [2024-11-08 07:37:55.570622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.570647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.752 [2024-11-08 07:37:55.570664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:37.752 passed 00:10:37.752 Test: blockdev nvme passthru rw ...passed 00:10:37.752 Test: blockdev nvme passthru vendor specific ...[2024-11-08 07:37:55.571401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:37.752 [2024-11-08 07:37:55.571436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.571560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:37.752 [2024-11-08 07:37:55.571583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.571684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:37.752 [2024-11-08 07:37:55.571704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:37.752 [2024-11-08 07:37:55.571816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:37.752 [2024-11-08 07:37:55.571836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:37.752 passed 00:10:37.752 Test: blockdev nvme admin passthru ...passed 00:10:37.752 Test: blockdev copy ...passed 00:10:37.752 00:10:37.752 Run Summary: Type Total Ran Passed Failed Inactive 00:10:37.752 suites 1 1 n/a 0 0 00:10:37.752 tests 23 23 23 0 0 00:10:37.752 asserts 152 152 152 0 n/a 00:10:37.752 00:10:37.752 Elapsed time = 0.142 seconds 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.012 rmmod nvme_tcp 00:10:38.012 rmmod nvme_fabrics 00:10:38.012 rmmod nvme_keyring 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66705 ']' 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66705 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 66705 ']' 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 66705 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66705 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66705' 00:10:38.012 killing process with pid 66705 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 66705 00:10:38.012 07:37:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 66705 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.270 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:38.271 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:38.529 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:38.529 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.529 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.529 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:38.529 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.529 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.530 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.530 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:38.530 00:10:38.530 real 0m2.917s 00:10:38.530 user 0m8.104s 00:10:38.530 sys 0m0.924s 00:10:38.530 ************************************ 00:10:38.530 END TEST nvmf_bdevio 00:10:38.530 ************************************ 00:10:38.530 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:38.530 07:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.530 07:37:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:38.530 00:10:38.530 real 2m30.999s 00:10:38.530 user 6m23.270s 00:10:38.530 sys 1m1.005s 00:10:38.530 07:37:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:38.530 ************************************ 00:10:38.530 END TEST nvmf_target_core 00:10:38.530 ************************************ 00:10:38.530 07:37:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.530 07:37:56 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:38.530 07:37:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:38.530 07:37:56 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.530 07:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.530 ************************************ 00:10:38.530 START TEST nvmf_target_extra 00:10:38.530 ************************************ 00:10:38.530 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:38.789 * Looking for test storage... 
00:10:38.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.789 --rc genhtml_branch_coverage=1 00:10:38.789 --rc genhtml_function_coverage=1 00:10:38.789 --rc genhtml_legend=1 00:10:38.789 --rc geninfo_all_blocks=1 00:10:38.789 --rc geninfo_unexecuted_blocks=1 00:10:38.789 00:10:38.789 ' 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.789 --rc genhtml_branch_coverage=1 00:10:38.789 --rc genhtml_function_coverage=1 00:10:38.789 --rc genhtml_legend=1 00:10:38.789 --rc geninfo_all_blocks=1 00:10:38.789 --rc geninfo_unexecuted_blocks=1 00:10:38.789 00:10:38.789 ' 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.789 --rc genhtml_branch_coverage=1 00:10:38.789 --rc genhtml_function_coverage=1 00:10:38.789 --rc genhtml_legend=1 00:10:38.789 --rc geninfo_all_blocks=1 00:10:38.789 --rc geninfo_unexecuted_blocks=1 00:10:38.789 00:10:38.789 ' 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.789 --rc genhtml_branch_coverage=1 00:10:38.789 --rc genhtml_function_coverage=1 00:10:38.789 --rc genhtml_legend=1 00:10:38.789 --rc geninfo_all_blocks=1 00:10:38.789 --rc geninfo_unexecuted_blocks=1 00:10:38.789 00:10:38.789 ' 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.789 07:37:56 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.789 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.790 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:38.790 ************************************ 00:10:38.790 START TEST nvmf_auth_target 00:10:38.790 ************************************ 00:10:38.790 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:39.049 * Looking for test storage... 
00:10:39.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.049 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:39.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.050 --rc genhtml_branch_coverage=1 00:10:39.050 --rc genhtml_function_coverage=1 00:10:39.050 --rc genhtml_legend=1 00:10:39.050 --rc geninfo_all_blocks=1 00:10:39.050 --rc geninfo_unexecuted_blocks=1 00:10:39.050 00:10:39.050 ' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:39.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.050 --rc genhtml_branch_coverage=1 00:10:39.050 --rc genhtml_function_coverage=1 00:10:39.050 --rc genhtml_legend=1 00:10:39.050 --rc geninfo_all_blocks=1 00:10:39.050 --rc geninfo_unexecuted_blocks=1 00:10:39.050 00:10:39.050 ' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:39.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.050 --rc genhtml_branch_coverage=1 00:10:39.050 --rc genhtml_function_coverage=1 00:10:39.050 --rc genhtml_legend=1 00:10:39.050 --rc geninfo_all_blocks=1 00:10:39.050 --rc geninfo_unexecuted_blocks=1 00:10:39.050 00:10:39.050 ' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:39.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.050 --rc genhtml_branch_coverage=1 00:10:39.050 --rc genhtml_function_coverage=1 00:10:39.050 --rc genhtml_legend=1 00:10:39.050 --rc geninfo_all_blocks=1 00:10:39.050 --rc geninfo_unexecuted_blocks=1 00:10:39.050 00:10:39.050 ' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.050 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:39.050 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:39.051 
07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:39.051 Cannot find device "nvmf_init_br" 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:39.051 Cannot find device "nvmf_init_br2" 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:39.051 Cannot find device "nvmf_tgt_br" 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:39.051 Cannot find device "nvmf_tgt_br2" 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:39.051 Cannot find device "nvmf_init_br" 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:39.051 07:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:39.310 Cannot find device "nvmf_init_br2" 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:39.310 Cannot find device "nvmf_tgt_br" 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:39.310 Cannot find device "nvmf_tgt_br2" 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:39.310 Cannot find device "nvmf_br" 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:39.310 Cannot find device "nvmf_init_if" 00:10:39.310 07:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:39.310 Cannot find device "nvmf_init_if2" 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:39.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:39.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:39.310 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:39.630 07:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:39.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:39.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:10:39.630 00:10:39.630 --- 10.0.0.3 ping statistics --- 00:10:39.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.630 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:39.630 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:39.630 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:39.630 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:10:39.630 00:10:39.630 --- 10.0.0.4 ping statistics --- 00:10:39.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.631 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:39.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:39.631 00:10:39.631 --- 10.0.0.1 ping statistics --- 00:10:39.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.631 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:39.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:10:39.631 00:10:39.631 --- 10.0.0.2 ping statistics --- 00:10:39.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.631 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67025 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67025 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67025 ']' 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:39.631 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.896 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:39.896 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:10:39.896 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.896 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.896 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67044 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e82e5ad38fcf67f951b464798ec077445f088b705ff9dacb 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5I3 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e82e5ad38fcf67f951b464798ec077445f088b705ff9dacb 0 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e82e5ad38fcf67f951b464798ec077445f088b705ff9dacb 0 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e82e5ad38fcf67f951b464798ec077445f088b705ff9dacb 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:40.156 07:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5I3 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5I3 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.5I3 00:10:40.156 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c84eff9cfc91897a0122bb37508c798847959901a0ccb0eda885b5eefa3d97d9 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DJV 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c84eff9cfc91897a0122bb37508c798847959901a0ccb0eda885b5eefa3d97d9 3 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c84eff9cfc91897a0122bb37508c798847959901a0ccb0eda885b5eefa3d97d9 3 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c84eff9cfc91897a0122bb37508c798847959901a0ccb0eda885b5eefa3d97d9 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:40.157 07:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DJV 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DJV 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.DJV 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:40.157 07:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2540ab5b38ec6a7c1a6cf56115931151 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5UK 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2540ab5b38ec6a7c1a6cf56115931151 1 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2540ab5b38ec6a7c1a6cf56115931151 1 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2540ab5b38ec6a7c1a6cf56115931151 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5UK 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5UK 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.5UK 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f3aa6d56e1838bd4cd8e933e00495088f39c49d4de655d0d 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.TDD 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f3aa6d56e1838bd4cd8e933e00495088f39c49d4de655d0d 2 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f3aa6d56e1838bd4cd8e933e00495088f39c49d4de655d0d 2 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f3aa6d56e1838bd4cd8e933e00495088f39c49d4de655d0d 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:40.157 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.TDD 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.TDD 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.TDD 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d70c9faeaeafa950991fc33d6dbb85d8dad4178122472b0c 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Kwk 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d70c9faeaeafa950991fc33d6dbb85d8dad4178122472b0c 2 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d70c9faeaeafa950991fc33d6dbb85d8dad4178122472b0c 2 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d70c9faeaeafa950991fc33d6dbb85d8dad4178122472b0c 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Kwk 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Kwk 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Kwk 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:40.417 07:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f8ca439bc2bd50ed9e5dc9761a11c453 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.DIj 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f8ca439bc2bd50ed9e5dc9761a11c453 1 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f8ca439bc2bd50ed9e5dc9761a11c453 1 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f8ca439bc2bd50ed9e5dc9761a11c453 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.DIj 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.DIj 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.DIj 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=690f87b95c96a60e036cf1ffa97dc4d32da5593718d854ce48a3eef82f4b1be5 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.f9H 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
690f87b95c96a60e036cf1ffa97dc4d32da5593718d854ce48a3eef82f4b1be5 3 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 690f87b95c96a60e036cf1ffa97dc4d32da5593718d854ce48a3eef82f4b1be5 3 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=690f87b95c96a60e036cf1ffa97dc4d32da5593718d854ce48a3eef82f4b1be5 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.f9H 00:10:40.417 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.f9H 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.f9H 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67025 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67025 ']' 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:40.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:40.674 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67044 /var/tmp/host.sock 00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67044 ']' 00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:40.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
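For reference, the gen_dhchap_key / format_dhchap_key helpers from nvmf/common.sh that the trace above keeps stepping through can be summarized by the sketch below. The xxd, mktemp and chmod steps are copied from the trace; the body of the embedded "python -" call is not shown in the log, so the DHHC-1 packing (base64 of the raw key bytes plus an appended little-endian CRC-32, with a two-digit digest id) is an assumption based on the usual DHHC-1 secret layout, not a verbatim copy of SPDK's helper.

# Sketch only: produce a DH-HMAC-CHAP secret file, as in "gen_dhchap_key sha512 64".
# digest is one of null/sha256/sha384/sha512; len is the hex-string length used by auth.sh.
gen_dhchap_key() {
    local digest=$1 len=$2
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # random hex, as in the trace
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    KEY_HEX=$key DIGEST_ID=${digests[$digest]} python3 - > "$file" <<'PY'
import base64, os, zlib
raw = bytes.fromhex(os.environ["KEY_HEX"])
crc = zlib.crc32(raw).to_bytes(4, "little")               # assumed: CRC-32 appended to the key
print(f'DHHC-1:{int(os.environ["DIGEST_ID"]):02d}:'
      f'{base64.b64encode(raw + crc).decode()}:')
PY
    chmod 0600 "$file"
    echo "$file"                                          # auth.sh stores this path in keys[]/ckeys[]
}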
00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:40.932 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5I3 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5I3 00:10:41.191 07:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5I3 00:10:41.449 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.DJV ]] 00:10:41.449 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DJV 00:10:41.449 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.449 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.449 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.449 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DJV 00:10:41.449 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DJV 00:10:41.708 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:41.708 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5UK 00:10:41.708 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.708 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.708 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.708 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5UK 00:10:41.708 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5UK 00:10:41.966 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.TDD ]] 00:10:41.966 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.TDD 00:10:41.966 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.966 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.966 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.966 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.TDD 00:10:41.966 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.TDD 00:10:42.225 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:42.225 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Kwk 00:10:42.225 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.225 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.225 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.225 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Kwk 00:10:42.225 07:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Kwk 00:10:42.225 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.DIj ]] 00:10:42.225 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DIj 00:10:42.225 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.225 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.225 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.225 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DIj 00:10:42.225 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DIj 00:10:42.483 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:42.483 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.f9H 00:10:42.483 07:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.483 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.742 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.742 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.f9H 00:10:42.742 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.f9H 00:10:42.742 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:42.743 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:42.743 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:42.743 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.743 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:42.743 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.002 07:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.259 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.518 { 00:10:43.518 "cntlid": 1, 00:10:43.518 "qid": 0, 00:10:43.518 "state": "enabled", 00:10:43.518 "thread": "nvmf_tgt_poll_group_000", 00:10:43.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:10:43.518 "listen_address": { 00:10:43.518 "trtype": "TCP", 00:10:43.518 "adrfam": "IPv4", 00:10:43.518 "traddr": "10.0.0.3", 00:10:43.518 "trsvcid": "4420" 00:10:43.518 }, 00:10:43.518 "peer_address": { 00:10:43.518 "trtype": "TCP", 00:10:43.518 "adrfam": "IPv4", 00:10:43.518 "traddr": "10.0.0.1", 00:10:43.518 "trsvcid": "58910" 00:10:43.518 }, 00:10:43.518 "auth": { 00:10:43.518 "state": "completed", 00:10:43.518 "digest": "sha256", 00:10:43.518 "dhgroup": "null" 00:10:43.518 } 00:10:43.518 } 00:10:43.518 ]' 00:10:43.518 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.776 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.776 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.776 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:43.776 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.776 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.777 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.777 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.036 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:10:44.036 07:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:48.226 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:48.227 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.227 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.227 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.227 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.227 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.227 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.227 07:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.227 07:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.227 00:10:48.227 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.227 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.227 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.485 { 00:10:48.485 "cntlid": 3, 00:10:48.485 "qid": 0, 00:10:48.485 "state": "enabled", 00:10:48.485 "thread": "nvmf_tgt_poll_group_000", 00:10:48.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:10:48.485 "listen_address": { 00:10:48.485 "trtype": "TCP", 00:10:48.485 "adrfam": "IPv4", 00:10:48.485 "traddr": "10.0.0.3", 00:10:48.485 "trsvcid": "4420" 00:10:48.485 }, 00:10:48.485 "peer_address": { 00:10:48.485 "trtype": "TCP", 00:10:48.485 "adrfam": "IPv4", 00:10:48.485 "traddr": "10.0.0.1", 00:10:48.485 "trsvcid": "34896" 00:10:48.485 }, 00:10:48.485 "auth": { 00:10:48.485 "state": "completed", 00:10:48.485 "digest": "sha256", 00:10:48.485 "dhgroup": "null" 00:10:48.485 } 00:10:48.485 } 00:10:48.485 ]' 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.485 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.744 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:48.744 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.744 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.744 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.744 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.744 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret 
DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:10:48.744 07:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:10:49.679 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.679 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:49.679 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.679 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.679 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.679 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.679 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:49.679 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.937 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.195 00:10:50.195 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.195 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.195 07:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.453 { 00:10:50.453 "cntlid": 5, 00:10:50.453 "qid": 0, 00:10:50.453 "state": "enabled", 00:10:50.453 "thread": "nvmf_tgt_poll_group_000", 00:10:50.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:10:50.453 "listen_address": { 00:10:50.453 "trtype": "TCP", 00:10:50.453 "adrfam": "IPv4", 00:10:50.453 "traddr": "10.0.0.3", 00:10:50.453 "trsvcid": "4420" 00:10:50.453 }, 00:10:50.453 "peer_address": { 00:10:50.453 "trtype": "TCP", 00:10:50.453 "adrfam": "IPv4", 00:10:50.453 "traddr": "10.0.0.1", 00:10:50.453 "trsvcid": "34924" 00:10:50.453 }, 00:10:50.453 "auth": { 00:10:50.453 "state": "completed", 00:10:50.453 "digest": "sha256", 00:10:50.453 "dhgroup": "null" 00:10:50.453 } 00:10:50.453 } 00:10:50.453 ]' 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.453 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.020 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:10:51.020 07:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:10:51.587 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.587 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:51.587 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.587 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.587 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.587 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.587 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:51.587 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.846 07:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.414 00:10:52.414 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.414 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.414 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.673 { 00:10:52.673 "cntlid": 7, 00:10:52.673 "qid": 0, 00:10:52.673 "state": "enabled", 00:10:52.673 "thread": "nvmf_tgt_poll_group_000", 00:10:52.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:10:52.673 "listen_address": { 00:10:52.673 "trtype": "TCP", 00:10:52.673 "adrfam": "IPv4", 00:10:52.673 "traddr": "10.0.0.3", 00:10:52.673 "trsvcid": "4420" 00:10:52.673 }, 00:10:52.673 "peer_address": { 00:10:52.673 "trtype": "TCP", 00:10:52.673 "adrfam": "IPv4", 00:10:52.673 "traddr": "10.0.0.1", 00:10:52.673 "trsvcid": "34954" 00:10:52.673 }, 00:10:52.673 "auth": { 00:10:52.673 "state": "completed", 00:10:52.673 "digest": "sha256", 00:10:52.673 "dhgroup": "null" 00:10:52.673 } 00:10:52.673 } 00:10:52.673 ]' 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.673 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.241 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:10:53.241 07:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:53.809 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.068 07:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.635 00:10:54.635 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.635 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.635 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.941 { 00:10:54.941 "cntlid": 9, 00:10:54.941 "qid": 0, 00:10:54.941 "state": "enabled", 00:10:54.941 "thread": "nvmf_tgt_poll_group_000", 00:10:54.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:10:54.941 "listen_address": { 00:10:54.941 "trtype": "TCP", 00:10:54.941 "adrfam": "IPv4", 00:10:54.941 "traddr": "10.0.0.3", 00:10:54.941 "trsvcid": "4420" 00:10:54.941 }, 00:10:54.941 "peer_address": { 00:10:54.941 "trtype": "TCP", 00:10:54.941 "adrfam": "IPv4", 00:10:54.941 "traddr": "10.0.0.1", 00:10:54.941 "trsvcid": "34964" 00:10:54.941 }, 00:10:54.941 "auth": { 00:10:54.941 "state": "completed", 00:10:54.941 "digest": "sha256", 00:10:54.941 "dhgroup": "ffdhe2048" 00:10:54.941 } 00:10:54.941 } 00:10:54.941 ]' 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.941 07:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.200 
07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:10:55.200 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:10:56.137 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.138 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:56.138 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.138 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.138 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.138 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.138 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:56.138 07:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.396 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.654 00:10:56.654 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.654 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.654 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.913 { 00:10:56.913 "cntlid": 11, 00:10:56.913 "qid": 0, 00:10:56.913 "state": "enabled", 00:10:56.913 "thread": "nvmf_tgt_poll_group_000", 00:10:56.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:10:56.913 "listen_address": { 00:10:56.913 "trtype": "TCP", 00:10:56.913 "adrfam": "IPv4", 00:10:56.913 "traddr": "10.0.0.3", 00:10:56.913 "trsvcid": "4420" 00:10:56.913 }, 00:10:56.913 "peer_address": { 00:10:56.913 "trtype": "TCP", 00:10:56.913 "adrfam": "IPv4", 00:10:56.913 "traddr": "10.0.0.1", 00:10:56.913 "trsvcid": "52702" 00:10:56.913 }, 00:10:56.913 "auth": { 00:10:56.913 "state": "completed", 00:10:56.913 "digest": "sha256", 00:10:56.913 "dhgroup": "ffdhe2048" 00:10:56.913 } 00:10:56.913 } 00:10:56.913 ]' 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.913 07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.913 
07:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.171 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:10:57.171 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:10:58.107 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.107 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:10:58.107 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.107 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.107 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.107 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.107 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:58.107 07:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.365 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.931 00:10:58.931 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.931 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.931 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.189 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.189 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.189 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.189 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.189 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.189 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.189 { 00:10:59.189 "cntlid": 13, 00:10:59.189 "qid": 0, 00:10:59.189 "state": "enabled", 00:10:59.189 "thread": "nvmf_tgt_poll_group_000", 00:10:59.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:10:59.189 "listen_address": { 00:10:59.189 "trtype": "TCP", 00:10:59.189 "adrfam": "IPv4", 00:10:59.189 "traddr": "10.0.0.3", 00:10:59.189 "trsvcid": "4420" 00:10:59.189 }, 00:10:59.189 "peer_address": { 00:10:59.189 "trtype": "TCP", 00:10:59.189 "adrfam": "IPv4", 00:10:59.189 "traddr": "10.0.0.1", 00:10:59.189 "trsvcid": "52724" 00:10:59.189 }, 00:10:59.189 "auth": { 00:10:59.189 "state": "completed", 00:10:59.189 "digest": "sha256", 00:10:59.189 "dhgroup": "ffdhe2048" 00:10:59.189 } 00:10:59.189 } 00:10:59.189 ]' 00:10:59.190 07:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.190 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.190 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.190 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:59.190 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.190 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.190 07:38:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.190 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.758 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:10:59.758 07:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:00.325 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.325 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:00.325 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.325 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.325 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.325 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.325 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.325 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:00.584 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:01.151 00:11:01.151 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.151 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.151 07:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.410 { 00:11:01.410 "cntlid": 15, 00:11:01.410 "qid": 0, 00:11:01.410 "state": "enabled", 00:11:01.410 "thread": "nvmf_tgt_poll_group_000", 00:11:01.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:01.410 "listen_address": { 00:11:01.410 "trtype": "TCP", 00:11:01.410 "adrfam": "IPv4", 00:11:01.410 "traddr": "10.0.0.3", 00:11:01.410 "trsvcid": "4420" 00:11:01.410 }, 00:11:01.410 "peer_address": { 00:11:01.410 "trtype": "TCP", 00:11:01.410 "adrfam": "IPv4", 00:11:01.410 "traddr": "10.0.0.1", 00:11:01.410 "trsvcid": "52748" 00:11:01.410 }, 00:11:01.410 "auth": { 00:11:01.410 "state": "completed", 00:11:01.410 "digest": "sha256", 00:11:01.410 "dhgroup": "ffdhe2048" 00:11:01.410 } 00:11:01.410 } 00:11:01.410 ]' 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.410 
07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.410 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.668 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:01.668 07:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:02.232 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.232 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:02.232 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.232 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.233 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.233 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:02.233 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.233 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:02.233 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.555 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.831 00:11:02.831 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.831 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.831 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.090 { 00:11:03.090 "cntlid": 17, 00:11:03.090 "qid": 0, 00:11:03.090 "state": "enabled", 00:11:03.090 "thread": "nvmf_tgt_poll_group_000", 00:11:03.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:03.090 "listen_address": { 00:11:03.090 "trtype": "TCP", 00:11:03.090 "adrfam": "IPv4", 00:11:03.090 "traddr": "10.0.0.3", 00:11:03.090 "trsvcid": "4420" 00:11:03.090 }, 00:11:03.090 "peer_address": { 00:11:03.090 "trtype": "TCP", 00:11:03.090 "adrfam": "IPv4", 00:11:03.090 "traddr": "10.0.0.1", 00:11:03.090 "trsvcid": "52776" 00:11:03.090 }, 00:11:03.090 "auth": { 00:11:03.090 "state": "completed", 00:11:03.090 "digest": "sha256", 00:11:03.090 "dhgroup": "ffdhe3072" 00:11:03.090 } 00:11:03.090 } 00:11:03.090 ]' 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.090 07:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.090 07:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.350 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:03.350 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:03.917 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.917 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:03.917 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.917 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.917 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.917 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.917 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:03.917 07:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.175 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.433 00:11:04.433 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.433 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.433 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.691 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.691 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.691 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.691 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.691 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.691 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.691 { 00:11:04.691 "cntlid": 19, 00:11:04.691 "qid": 0, 00:11:04.691 "state": "enabled", 00:11:04.691 "thread": "nvmf_tgt_poll_group_000", 00:11:04.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:04.691 "listen_address": { 00:11:04.691 "trtype": "TCP", 00:11:04.691 "adrfam": "IPv4", 00:11:04.691 "traddr": "10.0.0.3", 00:11:04.691 "trsvcid": "4420" 00:11:04.691 }, 00:11:04.691 "peer_address": { 00:11:04.691 "trtype": "TCP", 00:11:04.691 "adrfam": "IPv4", 00:11:04.691 "traddr": "10.0.0.1", 00:11:04.691 "trsvcid": "52800" 00:11:04.691 }, 00:11:04.691 "auth": { 00:11:04.691 "state": "completed", 00:11:04.691 "digest": "sha256", 00:11:04.691 "dhgroup": "ffdhe3072" 00:11:04.691 } 00:11:04.691 } 00:11:04.691 ]' 00:11:04.691 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.950 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.950 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.950 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:04.950 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.950 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.950 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.950 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.208 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:05.208 07:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:05.775 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.775 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:05.775 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.775 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.775 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.775 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.775 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:05.775 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.034 07:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.603 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.603 { 00:11:06.603 "cntlid": 21, 00:11:06.603 "qid": 0, 00:11:06.603 "state": "enabled", 00:11:06.603 "thread": "nvmf_tgt_poll_group_000", 00:11:06.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:06.603 "listen_address": { 00:11:06.603 "trtype": "TCP", 00:11:06.603 "adrfam": "IPv4", 00:11:06.603 "traddr": "10.0.0.3", 00:11:06.603 "trsvcid": "4420" 00:11:06.603 }, 00:11:06.603 "peer_address": { 00:11:06.603 "trtype": "TCP", 00:11:06.603 "adrfam": "IPv4", 00:11:06.603 "traddr": "10.0.0.1", 00:11:06.603 "trsvcid": "51384" 00:11:06.603 }, 00:11:06.603 "auth": { 00:11:06.603 "state": "completed", 00:11:06.603 "digest": "sha256", 00:11:06.603 "dhgroup": "ffdhe3072" 00:11:06.603 } 00:11:06.603 } 00:11:06.603 ]' 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.603 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.603 07:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.862 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:06.862 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.862 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.862 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.862 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.122 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:07.122 07:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:07.689 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.689 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:07.689 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.689 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.689 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.689 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.689 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:07.689 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.948 07:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:08.206 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.465 { 00:11:08.465 "cntlid": 23, 00:11:08.465 "qid": 0, 00:11:08.465 "state": "enabled", 00:11:08.465 "thread": "nvmf_tgt_poll_group_000", 00:11:08.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:08.465 "listen_address": { 00:11:08.465 "trtype": "TCP", 00:11:08.465 "adrfam": "IPv4", 00:11:08.465 "traddr": "10.0.0.3", 00:11:08.465 "trsvcid": "4420" 00:11:08.465 }, 00:11:08.465 "peer_address": { 00:11:08.465 "trtype": "TCP", 00:11:08.465 "adrfam": "IPv4", 00:11:08.465 "traddr": "10.0.0.1", 00:11:08.465 "trsvcid": "51424" 00:11:08.465 }, 00:11:08.465 "auth": { 00:11:08.465 "state": "completed", 00:11:08.465 "digest": "sha256", 00:11:08.465 "dhgroup": "ffdhe3072" 00:11:08.465 } 00:11:08.465 } 00:11:08.465 ]' 00:11:08.465 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.724 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:08.724 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.724 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:08.724 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.724 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.724 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.724 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.983 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:08.983 07:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:09.552 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.811 07:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.406 00:11:10.406 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.406 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.406 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.665 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.665 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.665 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.665 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.665 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.665 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.665 { 00:11:10.665 "cntlid": 25, 00:11:10.665 "qid": 0, 00:11:10.665 "state": "enabled", 00:11:10.665 "thread": "nvmf_tgt_poll_group_000", 00:11:10.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:10.665 "listen_address": { 00:11:10.665 "trtype": "TCP", 00:11:10.665 "adrfam": "IPv4", 00:11:10.665 "traddr": "10.0.0.3", 00:11:10.665 "trsvcid": "4420" 00:11:10.665 }, 00:11:10.665 "peer_address": { 00:11:10.665 "trtype": "TCP", 00:11:10.665 "adrfam": "IPv4", 00:11:10.665 "traddr": "10.0.0.1", 00:11:10.665 "trsvcid": "51442" 00:11:10.665 }, 00:11:10.665 "auth": { 00:11:10.665 "state": "completed", 00:11:10.665 "digest": "sha256", 00:11:10.665 "dhgroup": "ffdhe4096" 00:11:10.665 } 00:11:10.665 } 00:11:10.665 ]' 00:11:10.666 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:10.666 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.666 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.666 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:10.666 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.666 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.666 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.666 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.924 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:10.924 07:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:11.492 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.492 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:11.492 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.492 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.751 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.751 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.751 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:11.751 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.009 07:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.267 00:11:12.267 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.267 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.267 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.526 { 00:11:12.526 "cntlid": 27, 00:11:12.526 "qid": 0, 00:11:12.526 "state": "enabled", 00:11:12.526 "thread": "nvmf_tgt_poll_group_000", 00:11:12.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:12.526 "listen_address": { 00:11:12.526 "trtype": "TCP", 00:11:12.526 "adrfam": "IPv4", 00:11:12.526 "traddr": "10.0.0.3", 00:11:12.526 "trsvcid": "4420" 00:11:12.526 }, 00:11:12.526 "peer_address": { 00:11:12.526 "trtype": "TCP", 00:11:12.526 "adrfam": "IPv4", 00:11:12.526 "traddr": "10.0.0.1", 00:11:12.526 "trsvcid": "51458" 00:11:12.526 }, 00:11:12.526 "auth": { 00:11:12.526 "state": "completed", 
00:11:12.526 "digest": "sha256", 00:11:12.526 "dhgroup": "ffdhe4096" 00:11:12.526 } 00:11:12.526 } 00:11:12.526 ]' 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.526 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.785 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:12.785 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.785 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.785 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.785 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.044 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:13.044 07:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:13.608 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.608 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:13.608 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.608 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.608 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.608 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.608 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:13.608 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.173 07:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.173 07:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.432 00:11:14.432 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.432 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.432 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.691 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.691 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.691 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.691 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.691 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.691 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.691 { 00:11:14.691 "cntlid": 29, 00:11:14.691 "qid": 0, 00:11:14.691 "state": "enabled", 00:11:14.691 "thread": "nvmf_tgt_poll_group_000", 00:11:14.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:14.691 "listen_address": { 00:11:14.691 "trtype": "TCP", 00:11:14.691 "adrfam": "IPv4", 00:11:14.691 "traddr": "10.0.0.3", 00:11:14.691 "trsvcid": "4420" 00:11:14.691 }, 00:11:14.691 "peer_address": { 00:11:14.691 "trtype": "TCP", 00:11:14.691 "adrfam": 
"IPv4", 00:11:14.691 "traddr": "10.0.0.1", 00:11:14.691 "trsvcid": "51490" 00:11:14.691 }, 00:11:14.691 "auth": { 00:11:14.691 "state": "completed", 00:11:14.691 "digest": "sha256", 00:11:14.691 "dhgroup": "ffdhe4096" 00:11:14.691 } 00:11:14.691 } 00:11:14.691 ]' 00:11:14.691 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.949 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.949 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.950 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:14.950 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.950 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.950 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.950 07:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.207 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:15.208 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:16.143 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.143 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:16.143 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.143 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.143 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.143 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.143 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.143 07:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:16.143 07:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.143 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.710 00:11:16.710 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.710 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.710 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.710 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.710 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.710 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.968 { 00:11:16.968 "cntlid": 31, 00:11:16.968 "qid": 0, 00:11:16.968 "state": "enabled", 00:11:16.968 "thread": "nvmf_tgt_poll_group_000", 00:11:16.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:16.968 "listen_address": { 00:11:16.968 "trtype": "TCP", 00:11:16.968 "adrfam": "IPv4", 00:11:16.968 "traddr": "10.0.0.3", 00:11:16.968 "trsvcid": "4420" 00:11:16.968 }, 00:11:16.968 "peer_address": { 00:11:16.968 "trtype": "TCP", 
00:11:16.968 "adrfam": "IPv4", 00:11:16.968 "traddr": "10.0.0.1", 00:11:16.968 "trsvcid": "57178" 00:11:16.968 }, 00:11:16.968 "auth": { 00:11:16.968 "state": "completed", 00:11:16.968 "digest": "sha256", 00:11:16.968 "dhgroup": "ffdhe4096" 00:11:16.968 } 00:11:16.968 } 00:11:16.968 ]' 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.968 07:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.227 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:17.227 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:17.798 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:18.065 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:18.065 
07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.065 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.065 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:18.065 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:18.065 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.065 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.065 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.065 07:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.065 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.065 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.065 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.065 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.632 00:11:18.632 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.632 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.632 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.891 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.891 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.891 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.891 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.891 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.891 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.891 { 00:11:18.891 "cntlid": 33, 00:11:18.891 "qid": 0, 00:11:18.891 "state": "enabled", 00:11:18.891 "thread": "nvmf_tgt_poll_group_000", 00:11:18.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:18.891 "listen_address": { 00:11:18.891 "trtype": "TCP", 00:11:18.891 "adrfam": "IPv4", 00:11:18.891 "traddr": 
"10.0.0.3", 00:11:18.891 "trsvcid": "4420" 00:11:18.891 }, 00:11:18.891 "peer_address": { 00:11:18.891 "trtype": "TCP", 00:11:18.891 "adrfam": "IPv4", 00:11:18.891 "traddr": "10.0.0.1", 00:11:18.891 "trsvcid": "57204" 00:11:18.891 }, 00:11:18.891 "auth": { 00:11:18.891 "state": "completed", 00:11:18.891 "digest": "sha256", 00:11:18.891 "dhgroup": "ffdhe6144" 00:11:18.891 } 00:11:18.891 } 00:11:18.891 ]' 00:11:18.891 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.149 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.149 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.149 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:19.149 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.149 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.149 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.149 07:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.408 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:19.408 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:19.974 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.974 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:19.974 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.974 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.974 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.974 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.974 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:19.974 07:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.233 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.234 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.802 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.802 { 00:11:20.802 "cntlid": 35, 00:11:20.802 "qid": 0, 00:11:20.802 "state": "enabled", 00:11:20.802 "thread": "nvmf_tgt_poll_group_000", 
00:11:20.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:20.802 "listen_address": { 00:11:20.802 "trtype": "TCP", 00:11:20.802 "adrfam": "IPv4", 00:11:20.802 "traddr": "10.0.0.3", 00:11:20.802 "trsvcid": "4420" 00:11:20.802 }, 00:11:20.802 "peer_address": { 00:11:20.802 "trtype": "TCP", 00:11:20.802 "adrfam": "IPv4", 00:11:20.802 "traddr": "10.0.0.1", 00:11:20.802 "trsvcid": "57228" 00:11:20.802 }, 00:11:20.802 "auth": { 00:11:20.802 "state": "completed", 00:11:20.802 "digest": "sha256", 00:11:20.802 "dhgroup": "ffdhe6144" 00:11:20.802 } 00:11:20.802 } 00:11:20.802 ]' 00:11:20.802 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.061 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.061 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.061 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:21.061 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.061 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.061 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.061 07:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.320 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:21.320 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:21.886 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.886 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:21.886 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.886 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.886 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.886 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.886 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:21.886 07:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.145 07:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.404 00:11:22.404 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.404 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.404 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.972 { 
00:11:22.972 "cntlid": 37, 00:11:22.972 "qid": 0, 00:11:22.972 "state": "enabled", 00:11:22.972 "thread": "nvmf_tgt_poll_group_000", 00:11:22.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:22.972 "listen_address": { 00:11:22.972 "trtype": "TCP", 00:11:22.972 "adrfam": "IPv4", 00:11:22.972 "traddr": "10.0.0.3", 00:11:22.972 "trsvcid": "4420" 00:11:22.972 }, 00:11:22.972 "peer_address": { 00:11:22.972 "trtype": "TCP", 00:11:22.972 "adrfam": "IPv4", 00:11:22.972 "traddr": "10.0.0.1", 00:11:22.972 "trsvcid": "57260" 00:11:22.972 }, 00:11:22.972 "auth": { 00:11:22.972 "state": "completed", 00:11:22.972 "digest": "sha256", 00:11:22.972 "dhgroup": "ffdhe6144" 00:11:22.972 } 00:11:22.972 } 00:11:22.972 ]' 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.972 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.231 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:23.231 07:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:23.799 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.799 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:23.799 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.799 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.799 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.799 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.799 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:23.799 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.058 07:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.626 00:11:24.626 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.626 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.626 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:24.884 { 00:11:24.884 "cntlid": 39, 00:11:24.884 "qid": 0, 00:11:24.884 "state": "enabled", 00:11:24.884 "thread": "nvmf_tgt_poll_group_000", 00:11:24.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:24.884 "listen_address": { 00:11:24.884 "trtype": "TCP", 00:11:24.884 "adrfam": "IPv4", 00:11:24.884 "traddr": "10.0.0.3", 00:11:24.884 "trsvcid": "4420" 00:11:24.884 }, 00:11:24.884 "peer_address": { 00:11:24.884 "trtype": "TCP", 00:11:24.884 "adrfam": "IPv4", 00:11:24.884 "traddr": "10.0.0.1", 00:11:24.884 "trsvcid": "57284" 00:11:24.884 }, 00:11:24.884 "auth": { 00:11:24.884 "state": "completed", 00:11:24.884 "digest": "sha256", 00:11:24.884 "dhgroup": "ffdhe6144" 00:11:24.884 } 00:11:24.884 } 00:11:24.884 ]' 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.884 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.885 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.885 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.143 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:25.143 07:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:25.749 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.008 07:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.576 00:11:26.576 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.576 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.576 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.835 { 00:11:26.835 "cntlid": 41, 00:11:26.835 "qid": 0, 00:11:26.835 "state": "enabled", 00:11:26.835 "thread": "nvmf_tgt_poll_group_000", 00:11:26.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:26.835 "listen_address": { 00:11:26.835 "trtype": "TCP", 00:11:26.835 "adrfam": "IPv4", 00:11:26.835 "traddr": "10.0.0.3", 00:11:26.835 "trsvcid": "4420" 00:11:26.835 }, 00:11:26.835 "peer_address": { 00:11:26.835 "trtype": "TCP", 00:11:26.835 "adrfam": "IPv4", 00:11:26.835 "traddr": "10.0.0.1", 00:11:26.835 "trsvcid": "41478" 00:11:26.835 }, 00:11:26.835 "auth": { 00:11:26.835 "state": "completed", 00:11:26.835 "digest": "sha256", 00:11:26.835 "dhgroup": "ffdhe8192" 00:11:26.835 } 00:11:26.835 } 00:11:26.835 ]' 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.835 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.094 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:27.094 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.094 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.094 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.094 07:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.353 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:27.353 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:27.921 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.921 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:27.921 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.921 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.921 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
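Each pass of this loop repeats the same host-side round trip shown above: pin the host to a single digest/dhgroup pair, register the host NQN on the subsystem with the key (and controller key, when one exists) for that iteration, attach a controller through the host RPC socket, and verify the negotiated auth fields on the resulting qpair before detaching. A minimal sketch of that flow, reusing the socket path, address, and NQNs from this run; key1/ckey1 is picked only as an example, and the target-side calls are assumed to go to the target app's default RPC socket (the rpc_cmd wrapper in the trace hides that detail):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf
  subnqn=nqn.2024-03.io.spdk:cnode0
  # host side: only offer sha256 with ffdhe8192 during DH-HMAC-CHAP negotiation
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # target side: allow this host with key1 (and ckey1 for bidirectional auth); both were registered earlier in the run
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach a controller from the host and confirm what was negotiated on the qpair
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe8192
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0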
00:11:27.921 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.921 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:27.921 07:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.180 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.181 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.748 00:11:28.748 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.748 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.748 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.007 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.007 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.007 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.007 07:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.007 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.007 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.007 { 00:11:29.007 "cntlid": 43, 00:11:29.007 "qid": 0, 00:11:29.007 "state": "enabled", 00:11:29.007 "thread": "nvmf_tgt_poll_group_000", 00:11:29.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:29.007 "listen_address": { 00:11:29.007 "trtype": "TCP", 00:11:29.007 "adrfam": "IPv4", 00:11:29.007 "traddr": "10.0.0.3", 00:11:29.007 "trsvcid": "4420" 00:11:29.007 }, 00:11:29.007 "peer_address": { 00:11:29.007 "trtype": "TCP", 00:11:29.007 "adrfam": "IPv4", 00:11:29.007 "traddr": "10.0.0.1", 00:11:29.007 "trsvcid": "41514" 00:11:29.007 }, 00:11:29.007 "auth": { 00:11:29.007 "state": "completed", 00:11:29.007 "digest": "sha256", 00:11:29.007 "dhgroup": "ffdhe8192" 00:11:29.007 } 00:11:29.007 } 00:11:29.007 ]' 00:11:29.007 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.266 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.266 07:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.266 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:29.266 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.266 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.266 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.266 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.525 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:29.525 07:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:30.092 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.092 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:30.092 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.092 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
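After each RPC-level attach/detach, the same key pair is exercised through the kernel initiator: nvme-cli connects with the plaintext DHHC-1 secrets, the disconnect confirms exactly one controller was torn down, and the host entry is removed so the next key can be configured. A sketch of that step, with the key1 secret pair copied verbatim from this trace and the expected outcome noted as a comment (rpc_cmd is the autotest wrapper around rpc.py used throughout this log):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf \
      --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 \
      --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: \
      --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # expected: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf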
00:11:30.092 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.092 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.092 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:30.092 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:30.657 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:30.657 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.658 07:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.224 00:11:31.224 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.224 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.224 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.483 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.483 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.483 07:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.483 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.483 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.483 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.483 { 00:11:31.483 "cntlid": 45, 00:11:31.483 "qid": 0, 00:11:31.483 "state": "enabled", 00:11:31.483 "thread": "nvmf_tgt_poll_group_000", 00:11:31.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:31.483 "listen_address": { 00:11:31.483 "trtype": "TCP", 00:11:31.483 "adrfam": "IPv4", 00:11:31.483 "traddr": "10.0.0.3", 00:11:31.483 "trsvcid": "4420" 00:11:31.483 }, 00:11:31.483 "peer_address": { 00:11:31.483 "trtype": "TCP", 00:11:31.483 "adrfam": "IPv4", 00:11:31.483 "traddr": "10.0.0.1", 00:11:31.483 "trsvcid": "41550" 00:11:31.483 }, 00:11:31.483 "auth": { 00:11:31.483 "state": "completed", 00:11:31.483 "digest": "sha256", 00:11:31.483 "dhgroup": "ffdhe8192" 00:11:31.483 } 00:11:31.483 } 00:11:31.483 ]' 00:11:31.483 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.483 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.483 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.742 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:31.742 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.742 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.742 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.742 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.004 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:32.004 07:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:32.573 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.573 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:32.573 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:32.831 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.831 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.831 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.831 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:32.831 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.090 07:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.657 00:11:33.657 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.657 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.657 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.915 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.916 
07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.916 { 00:11:33.916 "cntlid": 47, 00:11:33.916 "qid": 0, 00:11:33.916 "state": "enabled", 00:11:33.916 "thread": "nvmf_tgt_poll_group_000", 00:11:33.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:33.916 "listen_address": { 00:11:33.916 "trtype": "TCP", 00:11:33.916 "adrfam": "IPv4", 00:11:33.916 "traddr": "10.0.0.3", 00:11:33.916 "trsvcid": "4420" 00:11:33.916 }, 00:11:33.916 "peer_address": { 00:11:33.916 "trtype": "TCP", 00:11:33.916 "adrfam": "IPv4", 00:11:33.916 "traddr": "10.0.0.1", 00:11:33.916 "trsvcid": "41566" 00:11:33.916 }, 00:11:33.916 "auth": { 00:11:33.916 "state": "completed", 00:11:33.916 "digest": "sha256", 00:11:33.916 "dhgroup": "ffdhe8192" 00:11:33.916 } 00:11:33.916 } 00:11:33.916 ]' 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.916 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.176 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.176 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.176 07:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.437 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:34.437 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:35.004 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.004 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:35.004 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.004 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
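From here the trace advances to the next digest/dhgroup combination. The overall shape of the sweep, reconstructed as a sketch from the for-loops visible in the xtrace; the exact digest, dhgroup and key lists are defined by auth.sh and are abbreviated here:

  # every configured digest is exercised against every dhgroup and every key index
  for digest in sha256 sha384 ...; do
    for dhgroup in null ffdhe2048 ... ffdhe8192; do
      for keyid in "${!keys[@]}"; do
        # per the trace: restrict the host to this digest/dhgroup, then run the full connect cycle
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done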
00:11:35.263 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.263 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:35.263 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:35.263 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.263 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:35.263 07:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.521 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.780 00:11:35.780 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.780 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.780 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.039 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.039 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.039 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.039 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.039 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.039 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.039 { 00:11:36.039 "cntlid": 49, 00:11:36.039 "qid": 0, 00:11:36.039 "state": "enabled", 00:11:36.039 "thread": "nvmf_tgt_poll_group_000", 00:11:36.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:36.039 "listen_address": { 00:11:36.039 "trtype": "TCP", 00:11:36.039 "adrfam": "IPv4", 00:11:36.039 "traddr": "10.0.0.3", 00:11:36.039 "trsvcid": "4420" 00:11:36.039 }, 00:11:36.039 "peer_address": { 00:11:36.039 "trtype": "TCP", 00:11:36.039 "adrfam": "IPv4", 00:11:36.039 "traddr": "10.0.0.1", 00:11:36.039 "trsvcid": "41590" 00:11:36.039 }, 00:11:36.039 "auth": { 00:11:36.039 "state": "completed", 00:11:36.039 "digest": "sha384", 00:11:36.039 "dhgroup": "null" 00:11:36.039 } 00:11:36.039 } 00:11:36.039 ]' 00:11:36.039 07:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.297 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.297 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.297 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:36.297 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.297 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.297 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.297 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.555 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:36.555 07:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:37.490 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.490 07:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:37.490 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.490 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.490 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.490 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.490 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:37.490 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.749 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.008 00:11:38.008 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.008 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.008 07:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.267 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.267 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.267 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.267 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.267 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.267 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.267 { 00:11:38.267 "cntlid": 51, 00:11:38.267 "qid": 0, 00:11:38.267 "state": "enabled", 00:11:38.267 "thread": "nvmf_tgt_poll_group_000", 00:11:38.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:38.267 "listen_address": { 00:11:38.267 "trtype": "TCP", 00:11:38.267 "adrfam": "IPv4", 00:11:38.267 "traddr": "10.0.0.3", 00:11:38.267 "trsvcid": "4420" 00:11:38.267 }, 00:11:38.267 "peer_address": { 00:11:38.267 "trtype": "TCP", 00:11:38.267 "adrfam": "IPv4", 00:11:38.267 "traddr": "10.0.0.1", 00:11:38.267 "trsvcid": "53700" 00:11:38.267 }, 00:11:38.267 "auth": { 00:11:38.267 "state": "completed", 00:11:38.267 "digest": "sha384", 00:11:38.267 "dhgroup": "null" 00:11:38.267 } 00:11:38.267 } 00:11:38.267 ]' 00:11:38.267 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.526 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.526 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.526 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:38.526 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.526 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.526 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.526 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.784 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:38.784 07:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.719 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.719 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.977 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.977 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.977 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.977 07:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.236 00:11:40.236 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.236 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.236 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.494 { 00:11:40.494 "cntlid": 53, 00:11:40.494 "qid": 0, 00:11:40.494 "state": "enabled", 00:11:40.494 "thread": "nvmf_tgt_poll_group_000", 00:11:40.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:40.494 "listen_address": { 00:11:40.494 "trtype": "TCP", 00:11:40.494 "adrfam": "IPv4", 00:11:40.494 "traddr": "10.0.0.3", 00:11:40.494 "trsvcid": "4420" 00:11:40.494 }, 00:11:40.494 "peer_address": { 00:11:40.494 "trtype": "TCP", 00:11:40.494 "adrfam": "IPv4", 00:11:40.494 "traddr": "10.0.0.1", 00:11:40.494 "trsvcid": "53718" 00:11:40.494 }, 00:11:40.494 "auth": { 00:11:40.494 "state": "completed", 00:11:40.494 "digest": "sha384", 00:11:40.494 "dhgroup": "null" 00:11:40.494 } 00:11:40.494 } 00:11:40.494 ]' 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.494 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.752 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:40.752 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.752 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.752 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.752 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.011 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:41.011 07:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.944 07:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.558 00:11:42.558 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.558 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:11:42.558 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.817 { 00:11:42.817 "cntlid": 55, 00:11:42.817 "qid": 0, 00:11:42.817 "state": "enabled", 00:11:42.817 "thread": "nvmf_tgt_poll_group_000", 00:11:42.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:42.817 "listen_address": { 00:11:42.817 "trtype": "TCP", 00:11:42.817 "adrfam": "IPv4", 00:11:42.817 "traddr": "10.0.0.3", 00:11:42.817 "trsvcid": "4420" 00:11:42.817 }, 00:11:42.817 "peer_address": { 00:11:42.817 "trtype": "TCP", 00:11:42.817 "adrfam": "IPv4", 00:11:42.817 "traddr": "10.0.0.1", 00:11:42.817 "trsvcid": "53752" 00:11:42.817 }, 00:11:42.817 "auth": { 00:11:42.817 "state": "completed", 00:11:42.817 "digest": "sha384", 00:11:42.817 "dhgroup": "null" 00:11:42.817 } 00:11:42.817 } 00:11:42.817 ]' 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.817 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.076 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:43.076 07:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:43.641 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.899 07:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.466 00:11:44.466 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.466 07:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.466 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.724 { 00:11:44.724 "cntlid": 57, 00:11:44.724 "qid": 0, 00:11:44.724 "state": "enabled", 00:11:44.724 "thread": "nvmf_tgt_poll_group_000", 00:11:44.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:44.724 "listen_address": { 00:11:44.724 "trtype": "TCP", 00:11:44.724 "adrfam": "IPv4", 00:11:44.724 "traddr": "10.0.0.3", 00:11:44.724 "trsvcid": "4420" 00:11:44.724 }, 00:11:44.724 "peer_address": { 00:11:44.724 "trtype": "TCP", 00:11:44.724 "adrfam": "IPv4", 00:11:44.724 "traddr": "10.0.0.1", 00:11:44.724 "trsvcid": "53780" 00:11:44.724 }, 00:11:44.724 "auth": { 00:11:44.724 "state": "completed", 00:11:44.724 "digest": "sha384", 00:11:44.724 "dhgroup": "ffdhe2048" 00:11:44.724 } 00:11:44.724 } 00:11:44.724 ]' 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.724 07:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.292 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:45.292 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: 
--dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:45.861 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.861 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:45.861 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.861 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.861 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.861 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.861 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:45.861 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.120 07:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.120 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.120 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.120 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.120 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.688 00:11:46.688 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.688 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.688 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.688 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.947 { 00:11:46.947 "cntlid": 59, 00:11:46.947 "qid": 0, 00:11:46.947 "state": "enabled", 00:11:46.947 "thread": "nvmf_tgt_poll_group_000", 00:11:46.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:46.947 "listen_address": { 00:11:46.947 "trtype": "TCP", 00:11:46.947 "adrfam": "IPv4", 00:11:46.947 "traddr": "10.0.0.3", 00:11:46.947 "trsvcid": "4420" 00:11:46.947 }, 00:11:46.947 "peer_address": { 00:11:46.947 "trtype": "TCP", 00:11:46.947 "adrfam": "IPv4", 00:11:46.947 "traddr": "10.0.0.1", 00:11:46.947 "trsvcid": "59060" 00:11:46.947 }, 00:11:46.947 "auth": { 00:11:46.947 "state": "completed", 00:11:46.947 "digest": "sha384", 00:11:46.947 "dhgroup": "ffdhe2048" 00:11:46.947 } 00:11:46.947 } 00:11:46.947 ]' 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.947 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.207 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:47.207 07:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:47.775 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.775 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:47.775 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.775 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.775 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.775 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.775 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:47.775 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.034 07:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.293 00:11:48.293 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.293 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.293 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.552 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.552 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.552 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.552 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.811 { 00:11:48.811 "cntlid": 61, 00:11:48.811 "qid": 0, 00:11:48.811 "state": "enabled", 00:11:48.811 "thread": "nvmf_tgt_poll_group_000", 00:11:48.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:48.811 "listen_address": { 00:11:48.811 "trtype": "TCP", 00:11:48.811 "adrfam": "IPv4", 00:11:48.811 "traddr": "10.0.0.3", 00:11:48.811 "trsvcid": "4420" 00:11:48.811 }, 00:11:48.811 "peer_address": { 00:11:48.811 "trtype": "TCP", 00:11:48.811 "adrfam": "IPv4", 00:11:48.811 "traddr": "10.0.0.1", 00:11:48.811 "trsvcid": "59082" 00:11:48.811 }, 00:11:48.811 "auth": { 00:11:48.811 "state": "completed", 00:11:48.811 "digest": "sha384", 00:11:48.811 "dhgroup": "ffdhe2048" 00:11:48.811 } 00:11:48.811 } 00:11:48.811 ]' 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.811 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.071 07:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:49.071 07:39:06 
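Every keyid pass in this trace runs the same DH-HMAC-CHAP round trip. Condensed into plain commands, with the paths, NQNs and key names taken from the log above (an illustrative sketch of the flow, not a verbatim excerpt of target/auth.sh; the DHHC-1 secrets are elided into placeholder variables):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf
hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf

# Host-side initiator (/var/tmp/host.sock): restrict negotiation to the digest/dhgroup under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# Target side (default socket): admit the host with the key pair under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# SPDK initiator: attach, confirm the controller came up, inspect the authenticated qpair, detach.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'              # digest/dhgroup/state checks
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# Kernel initiator: repeat the handshake with nvme-cli and the raw DHHC-1 secrets, then clean up.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
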
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:49.639 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.639 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:49.639 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.639 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.639 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.639 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.639 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:49.639 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.898 07:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.466 00:11:50.466 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.466 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.466 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.724 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.724 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.724 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.724 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.724 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.724 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.724 { 00:11:50.724 "cntlid": 63, 00:11:50.724 "qid": 0, 00:11:50.724 "state": "enabled", 00:11:50.724 "thread": "nvmf_tgt_poll_group_000", 00:11:50.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:50.724 "listen_address": { 00:11:50.724 "trtype": "TCP", 00:11:50.724 "adrfam": "IPv4", 00:11:50.724 "traddr": "10.0.0.3", 00:11:50.724 "trsvcid": "4420" 00:11:50.724 }, 00:11:50.724 "peer_address": { 00:11:50.724 "trtype": "TCP", 00:11:50.724 "adrfam": "IPv4", 00:11:50.724 "traddr": "10.0.0.1", 00:11:50.724 "trsvcid": "59106" 00:11:50.724 }, 00:11:50.725 "auth": { 00:11:50.725 "state": "completed", 00:11:50.725 "digest": "sha384", 00:11:50.725 "dhgroup": "ffdhe2048" 00:11:50.725 } 00:11:50.725 } 00:11:50.725 ]' 00:11:50.725 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.725 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.725 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.725 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.725 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.725 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.725 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.725 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.984 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:50.984 07:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:51.552 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:51.817 07:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.387 00:11:52.387 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.387 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.387 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.387 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.387 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.387 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.387 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.646 { 00:11:52.646 "cntlid": 65, 00:11:52.646 "qid": 0, 00:11:52.646 "state": "enabled", 00:11:52.646 "thread": "nvmf_tgt_poll_group_000", 00:11:52.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:52.646 "listen_address": { 00:11:52.646 "trtype": "TCP", 00:11:52.646 "adrfam": "IPv4", 00:11:52.646 "traddr": "10.0.0.3", 00:11:52.646 "trsvcid": "4420" 00:11:52.646 }, 00:11:52.646 "peer_address": { 00:11:52.646 "trtype": "TCP", 00:11:52.646 "adrfam": "IPv4", 00:11:52.646 "traddr": "10.0.0.1", 00:11:52.646 "trsvcid": "59138" 00:11:52.646 }, 00:11:52.646 "auth": { 00:11:52.646 "state": "completed", 00:11:52.646 "digest": "sha384", 00:11:52.646 "dhgroup": "ffdhe3072" 00:11:52.646 } 00:11:52.646 } 00:11:52.646 ]' 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.646 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.905 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:52.905 07:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:11:53.842 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.842 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:53.842 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.842 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.843 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.843 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.843 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:53.843 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.101 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.102 07:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.102 07:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.360 00:11:54.360 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.360 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.360 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.619 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.620 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.620 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.620 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.620 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.620 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.620 { 00:11:54.620 "cntlid": 67, 00:11:54.620 "qid": 0, 00:11:54.620 "state": "enabled", 00:11:54.620 "thread": "nvmf_tgt_poll_group_000", 00:11:54.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:54.620 "listen_address": { 00:11:54.620 "trtype": "TCP", 00:11:54.620 "adrfam": "IPv4", 00:11:54.620 "traddr": "10.0.0.3", 00:11:54.620 "trsvcid": "4420" 00:11:54.620 }, 00:11:54.620 "peer_address": { 00:11:54.620 "trtype": "TCP", 00:11:54.620 "adrfam": "IPv4", 00:11:54.620 "traddr": "10.0.0.1", 00:11:54.620 "trsvcid": "59170" 00:11:54.620 }, 00:11:54.620 "auth": { 00:11:54.620 "state": "completed", 00:11:54.620 "digest": "sha384", 00:11:54.620 "dhgroup": "ffdhe3072" 00:11:54.620 } 00:11:54.620 } 00:11:54.620 ]' 00:11:54.620 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.879 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:54.879 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.879 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:54.879 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.879 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.879 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.879 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.138 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:55.138 07:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:11:55.705 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.705 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:55.705 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.705 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.705 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.705 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.705 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:55.705 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.979 07:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.592 00:11:56.592 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.592 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.592 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.851 { 00:11:56.851 "cntlid": 69, 00:11:56.851 "qid": 0, 00:11:56.851 "state": "enabled", 00:11:56.851 "thread": "nvmf_tgt_poll_group_000", 00:11:56.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:56.851 "listen_address": { 00:11:56.851 "trtype": "TCP", 00:11:56.851 "adrfam": "IPv4", 00:11:56.851 "traddr": "10.0.0.3", 00:11:56.851 "trsvcid": "4420" 00:11:56.851 }, 00:11:56.851 "peer_address": { 00:11:56.851 "trtype": "TCP", 00:11:56.851 "adrfam": "IPv4", 00:11:56.851 "traddr": "10.0.0.1", 00:11:56.851 "trsvcid": "52002" 00:11:56.851 }, 00:11:56.851 "auth": { 00:11:56.851 "state": "completed", 00:11:56.851 "digest": "sha384", 00:11:56.851 "dhgroup": "ffdhe3072" 00:11:56.851 } 00:11:56.851 } 00:11:56.851 ]' 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:56.851 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.110 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:57.110 07:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:11:57.678 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.678 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:57.678 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.678 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.678 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.678 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.678 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:57.678 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:57.937 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:57.937 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:57.938 07:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.506 00:11:58.506 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.506 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.506 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.764 { 00:11:58.764 "cntlid": 71, 00:11:58.764 "qid": 0, 00:11:58.764 "state": "enabled", 00:11:58.764 "thread": "nvmf_tgt_poll_group_000", 00:11:58.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:11:58.764 "listen_address": { 00:11:58.764 "trtype": "TCP", 00:11:58.764 "adrfam": "IPv4", 00:11:58.764 "traddr": "10.0.0.3", 00:11:58.764 "trsvcid": "4420" 00:11:58.764 }, 00:11:58.764 "peer_address": { 00:11:58.764 "trtype": "TCP", 00:11:58.764 "adrfam": "IPv4", 00:11:58.764 "traddr": "10.0.0.1", 00:11:58.764 "trsvcid": "52014" 00:11:58.764 }, 00:11:58.764 "auth": { 00:11:58.764 "state": "completed", 00:11:58.764 "digest": "sha384", 00:11:58.764 "dhgroup": "ffdhe3072" 00:11:58.764 } 00:11:58.764 } 00:11:58.764 ]' 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:58.764 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.765 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.765 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.765 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.331 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:59.331 07:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:11:59.590 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.848 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:11:59.848 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.848 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.848 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.848 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:59.848 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.848 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:59.848 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.106 07:39:17 
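The auth.sh line tags in the trace (@119, @120, @121, @123) outline the loop driving this whole section: an outer sweep over DH groups (ffdhe2048, ffdhe3072 and ffdhe4096 appear in this part of the log) and an inner sweep over the configured key indexes key0 through key3. Its reconstructed shape, as a sketch based on those tags rather than the script verbatim:

for dhgroup in "${dhgroups[@]}"; do      # target/auth.sh@119
    for keyid in "${!keys[@]}"; do       # target/auth.sh@120
        # hostrpc is the trace's wrapper around rpc.py -s /var/tmp/host.sock
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"   # @121
        connect_authenticate sha384 "$dhgroup" "$keyid"                                      # @123
    done
done
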
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.106 07:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.363 00:12:00.364 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.364 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.364 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.622 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.622 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.622 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.622 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.881 { 00:12:00.881 "cntlid": 73, 00:12:00.881 "qid": 0, 00:12:00.881 "state": "enabled", 00:12:00.881 "thread": "nvmf_tgt_poll_group_000", 00:12:00.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:00.881 "listen_address": { 00:12:00.881 "trtype": "TCP", 00:12:00.881 "adrfam": "IPv4", 00:12:00.881 "traddr": "10.0.0.3", 00:12:00.881 "trsvcid": "4420" 00:12:00.881 }, 00:12:00.881 "peer_address": { 00:12:00.881 "trtype": "TCP", 00:12:00.881 "adrfam": "IPv4", 00:12:00.881 "traddr": "10.0.0.1", 00:12:00.881 "trsvcid": "52034" 00:12:00.881 }, 00:12:00.881 "auth": { 00:12:00.881 "state": "completed", 00:12:00.881 "digest": "sha384", 00:12:00.881 "dhgroup": "ffdhe4096" 00:12:00.881 } 00:12:00.881 } 00:12:00.881 ]' 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.881 07:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.140 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:01.140 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:02.077 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.077 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:02.077 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.077 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.077 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.077 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.077 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:02.077 07:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.077 07:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.077 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.660 00:12:02.660 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.660 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.660 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.922 { 00:12:02.922 "cntlid": 75, 00:12:02.922 "qid": 0, 00:12:02.922 "state": "enabled", 00:12:02.922 "thread": "nvmf_tgt_poll_group_000", 00:12:02.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:02.922 "listen_address": { 00:12:02.922 "trtype": "TCP", 00:12:02.922 "adrfam": "IPv4", 00:12:02.922 "traddr": "10.0.0.3", 00:12:02.922 "trsvcid": "4420" 00:12:02.922 }, 00:12:02.922 "peer_address": { 00:12:02.922 "trtype": "TCP", 00:12:02.922 "adrfam": "IPv4", 00:12:02.922 "traddr": "10.0.0.1", 00:12:02.922 "trsvcid": "52056" 00:12:02.922 }, 00:12:02.922 "auth": { 00:12:02.922 "state": "completed", 00:12:02.922 "digest": "sha384", 00:12:02.922 "dhgroup": "ffdhe4096" 00:12:02.922 } 00:12:02.922 } 00:12:02.922 ]' 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.922 07:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.490 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:03.491 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:03.749 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.749 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:03.749 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.749 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.009 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.009 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.009 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:04.009 07:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:04.267 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:04.267 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.267 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:04.267 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:04.267 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:04.267 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.268 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.268 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.268 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.268 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.268 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.268 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.268 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.525 00:12:04.525 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.525 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.526 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.783 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.783 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.783 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.783 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.783 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.783 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.783 { 00:12:04.783 "cntlid": 77, 00:12:04.783 "qid": 0, 00:12:04.783 "state": "enabled", 00:12:04.783 "thread": "nvmf_tgt_poll_group_000", 00:12:04.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:04.783 "listen_address": { 00:12:04.784 "trtype": "TCP", 00:12:04.784 "adrfam": "IPv4", 00:12:04.784 "traddr": "10.0.0.3", 00:12:04.784 "trsvcid": "4420" 00:12:04.784 }, 00:12:04.784 "peer_address": { 00:12:04.784 "trtype": "TCP", 00:12:04.784 "adrfam": "IPv4", 00:12:04.784 "traddr": "10.0.0.1", 00:12:04.784 "trsvcid": "52070" 00:12:04.784 }, 00:12:04.784 "auth": { 00:12:04.784 "state": "completed", 00:12:04.784 "digest": "sha384", 00:12:04.784 "dhgroup": "ffdhe4096" 00:12:04.784 } 00:12:04.784 } 00:12:04.784 ]' 00:12:04.784 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.042 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.042 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:05.042 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:05.042 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.042 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.042 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.042 07:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.300 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:05.300 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.868 07:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.868 07:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:06.434 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.434 { 00:12:06.434 "cntlid": 79, 00:12:06.434 "qid": 0, 00:12:06.434 "state": "enabled", 00:12:06.434 "thread": "nvmf_tgt_poll_group_000", 00:12:06.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:06.434 "listen_address": { 00:12:06.434 "trtype": "TCP", 00:12:06.434 "adrfam": "IPv4", 00:12:06.434 "traddr": "10.0.0.3", 00:12:06.434 "trsvcid": "4420" 00:12:06.434 }, 00:12:06.434 "peer_address": { 00:12:06.434 "trtype": "TCP", 00:12:06.434 "adrfam": "IPv4", 00:12:06.434 "traddr": "10.0.0.1", 00:12:06.434 "trsvcid": "52090" 00:12:06.434 }, 00:12:06.434 "auth": { 00:12:06.434 "state": "completed", 00:12:06.434 "digest": "sha384", 00:12:06.434 "dhgroup": "ffdhe4096" 00:12:06.434 } 00:12:06.434 } 00:12:06.434 ]' 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.434 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.434 07:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.692 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.692 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.692 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.692 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.692 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.951 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:06.951 07:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:07.517 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.775 07:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.343 00:12:08.343 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.343 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.343 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.602 { 00:12:08.602 "cntlid": 81, 00:12:08.602 "qid": 0, 00:12:08.602 "state": "enabled", 00:12:08.602 "thread": "nvmf_tgt_poll_group_000", 00:12:08.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:08.602 "listen_address": { 00:12:08.602 "trtype": "TCP", 00:12:08.602 "adrfam": "IPv4", 00:12:08.602 "traddr": "10.0.0.3", 00:12:08.602 "trsvcid": "4420" 00:12:08.602 }, 00:12:08.602 "peer_address": { 00:12:08.602 "trtype": "TCP", 00:12:08.602 "adrfam": "IPv4", 00:12:08.602 "traddr": "10.0.0.1", 00:12:08.602 "trsvcid": "53930" 00:12:08.602 }, 00:12:08.602 "auth": { 00:12:08.602 "state": "completed", 00:12:08.602 "digest": "sha384", 00:12:08.602 "dhgroup": "ffdhe6144" 00:12:08.602 } 00:12:08.602 } 00:12:08.602 ]' 00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
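The qpair check that recurs throughout this run reduces to three assertions against the output of nvmf_subsystem_get_qpairs, matching the digest and dhgroup configured immediately beforehand. A minimal sketch of that step under the same paths seen in this trace (the helper name check_qpair_auth and the hard-coded expected values are illustrative, not part of target/auth.sh):

# Verify the negotiated DH-CHAP parameters on the first qpair of cnode0.
# rpc.py path and subsystem NQN are taken from the trace above; the expected
# values correspond to a sha384/ffdhe6144 iteration.
check_qpair_auth() {
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local qpairs
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)   # target-side RPC (default socket)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]] || return 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]] || return 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || return 1
}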
00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.602 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.861 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:08.861 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.861 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.861 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.861 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.119 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:09.119 07:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:09.686 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.686 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:09.686 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.686 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.686 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.686 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.686 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:09.686 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.952 07:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.222 00:12:10.222 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.222 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.222 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.789 { 00:12:10.789 "cntlid": 83, 00:12:10.789 "qid": 0, 00:12:10.789 "state": "enabled", 00:12:10.789 "thread": "nvmf_tgt_poll_group_000", 00:12:10.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:10.789 "listen_address": { 00:12:10.789 "trtype": "TCP", 00:12:10.789 "adrfam": "IPv4", 00:12:10.789 "traddr": "10.0.0.3", 00:12:10.789 "trsvcid": "4420" 00:12:10.789 }, 00:12:10.789 "peer_address": { 00:12:10.789 "trtype": "TCP", 00:12:10.789 "adrfam": "IPv4", 00:12:10.789 "traddr": "10.0.0.1", 00:12:10.789 "trsvcid": "53938" 00:12:10.789 }, 00:12:10.789 "auth": { 00:12:10.789 "state": "completed", 00:12:10.789 "digest": "sha384", 
00:12:10.789 "dhgroup": "ffdhe6144" 00:12:10.789 } 00:12:10.789 } 00:12:10.789 ]' 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.789 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.048 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:11.048 07:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:11.615 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.615 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:11.615 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.615 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.615 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.615 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.615 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:11.615 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.875 07:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.442 00:12:12.442 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.442 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.442 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.701 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.702 { 00:12:12.702 "cntlid": 85, 00:12:12.702 "qid": 0, 00:12:12.702 "state": "enabled", 00:12:12.702 "thread": "nvmf_tgt_poll_group_000", 00:12:12.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:12.702 "listen_address": { 00:12:12.702 "trtype": "TCP", 00:12:12.702 "adrfam": "IPv4", 00:12:12.702 "traddr": "10.0.0.3", 00:12:12.702 "trsvcid": "4420" 00:12:12.702 }, 00:12:12.702 "peer_address": { 00:12:12.702 "trtype": "TCP", 00:12:12.702 "adrfam": "IPv4", 00:12:12.702 "traddr": "10.0.0.1", 00:12:12.702 "trsvcid": "53972" 
00:12:12.702 }, 00:12:12.702 "auth": { 00:12:12.702 "state": "completed", 00:12:12.702 "digest": "sha384", 00:12:12.702 "dhgroup": "ffdhe6144" 00:12:12.702 } 00:12:12.702 } 00:12:12.702 ]' 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.702 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.961 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:12.961 07:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:13.898 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.898 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:13.898 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.898 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.898 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.898 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.898 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:13.898 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:14.157 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.158 07:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.416 00:12:14.416 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.674 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.674 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.933 { 00:12:14.933 "cntlid": 87, 00:12:14.933 "qid": 0, 00:12:14.933 "state": "enabled", 00:12:14.933 "thread": "nvmf_tgt_poll_group_000", 00:12:14.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:14.933 "listen_address": { 00:12:14.933 "trtype": "TCP", 00:12:14.933 "adrfam": "IPv4", 00:12:14.933 "traddr": "10.0.0.3", 00:12:14.933 "trsvcid": "4420" 00:12:14.933 }, 00:12:14.933 "peer_address": { 00:12:14.933 "trtype": "TCP", 00:12:14.933 "adrfam": "IPv4", 00:12:14.933 "traddr": "10.0.0.1", 00:12:14.933 "trsvcid": 
"54000" 00:12:14.933 }, 00:12:14.933 "auth": { 00:12:14.933 "state": "completed", 00:12:14.933 "digest": "sha384", 00:12:14.933 "dhgroup": "ffdhe6144" 00:12:14.933 } 00:12:14.933 } 00:12:14.933 ]' 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.933 07:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.191 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:15.191 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:16.127 07:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:16.385 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:16.385 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:16.385 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.386 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.952 00:12:16.952 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.952 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.952 07:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.519 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.519 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.519 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.519 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.520 { 00:12:17.520 "cntlid": 89, 00:12:17.520 "qid": 0, 00:12:17.520 "state": "enabled", 00:12:17.520 "thread": "nvmf_tgt_poll_group_000", 00:12:17.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:17.520 "listen_address": { 00:12:17.520 "trtype": "TCP", 00:12:17.520 "adrfam": "IPv4", 00:12:17.520 "traddr": "10.0.0.3", 00:12:17.520 "trsvcid": "4420" 00:12:17.520 }, 00:12:17.520 "peer_address": { 00:12:17.520 
"trtype": "TCP", 00:12:17.520 "adrfam": "IPv4", 00:12:17.520 "traddr": "10.0.0.1", 00:12:17.520 "trsvcid": "47968" 00:12:17.520 }, 00:12:17.520 "auth": { 00:12:17.520 "state": "completed", 00:12:17.520 "digest": "sha384", 00:12:17.520 "dhgroup": "ffdhe8192" 00:12:17.520 } 00:12:17.520 } 00:12:17.520 ]' 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.520 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.778 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:17.778 07:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:18.769 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.769 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:18.769 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.769 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.769 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.769 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.769 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:18.769 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:19.027 07:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.027 07:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.593 00:12:19.593 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.593 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.593 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.852 { 00:12:19.852 "cntlid": 91, 00:12:19.852 "qid": 0, 00:12:19.852 "state": "enabled", 00:12:19.852 "thread": "nvmf_tgt_poll_group_000", 00:12:19.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 
00:12:19.852 "listen_address": { 00:12:19.852 "trtype": "TCP", 00:12:19.852 "adrfam": "IPv4", 00:12:19.852 "traddr": "10.0.0.3", 00:12:19.852 "trsvcid": "4420" 00:12:19.852 }, 00:12:19.852 "peer_address": { 00:12:19.852 "trtype": "TCP", 00:12:19.852 "adrfam": "IPv4", 00:12:19.852 "traddr": "10.0.0.1", 00:12:19.852 "trsvcid": "47984" 00:12:19.852 }, 00:12:19.852 "auth": { 00:12:19.852 "state": "completed", 00:12:19.852 "digest": "sha384", 00:12:19.852 "dhgroup": "ffdhe8192" 00:12:19.852 } 00:12:19.852 } 00:12:19.852 ]' 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.852 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.111 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.111 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.111 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.111 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.111 07:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.370 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:20.370 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:21.304 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.304 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:21.304 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.304 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.304 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.304 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.304 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:21.304 07:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.304 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.871 00:12:21.871 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.871 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.871 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.129 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.129 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.129 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.129 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.129 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.129 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.129 { 00:12:22.129 "cntlid": 93, 00:12:22.129 "qid": 0, 00:12:22.129 "state": "enabled", 00:12:22.129 "thread": 
"nvmf_tgt_poll_group_000", 00:12:22.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:22.129 "listen_address": { 00:12:22.129 "trtype": "TCP", 00:12:22.129 "adrfam": "IPv4", 00:12:22.129 "traddr": "10.0.0.3", 00:12:22.129 "trsvcid": "4420" 00:12:22.129 }, 00:12:22.129 "peer_address": { 00:12:22.129 "trtype": "TCP", 00:12:22.129 "adrfam": "IPv4", 00:12:22.129 "traddr": "10.0.0.1", 00:12:22.129 "trsvcid": "47996" 00:12:22.129 }, 00:12:22.129 "auth": { 00:12:22.129 "state": "completed", 00:12:22.129 "digest": "sha384", 00:12:22.129 "dhgroup": "ffdhe8192" 00:12:22.129 } 00:12:22.129 } 00:12:22.129 ]' 00:12:22.129 07:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.129 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.129 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.129 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:22.129 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.388 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.388 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.388 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.647 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:22.647 07:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:23.214 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.214 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:23.214 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.214 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.214 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.214 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.214 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:23.214 07:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.782 07:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.350 00:12:24.350 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.350 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.350 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.608 { 00:12:24.608 "cntlid": 95, 00:12:24.608 "qid": 0, 00:12:24.608 "state": "enabled", 00:12:24.608 
"thread": "nvmf_tgt_poll_group_000", 00:12:24.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:24.608 "listen_address": { 00:12:24.608 "trtype": "TCP", 00:12:24.608 "adrfam": "IPv4", 00:12:24.608 "traddr": "10.0.0.3", 00:12:24.608 "trsvcid": "4420" 00:12:24.608 }, 00:12:24.608 "peer_address": { 00:12:24.608 "trtype": "TCP", 00:12:24.608 "adrfam": "IPv4", 00:12:24.608 "traddr": "10.0.0.1", 00:12:24.608 "trsvcid": "48022" 00:12:24.608 }, 00:12:24.608 "auth": { 00:12:24.608 "state": "completed", 00:12:24.608 "digest": "sha384", 00:12:24.608 "dhgroup": "ffdhe8192" 00:12:24.608 } 00:12:24.608 } 00:12:24.608 ]' 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.608 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.867 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.867 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.867 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.867 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:24.867 07:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.806 07:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:25.806 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.064 07:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.323 00:12:26.582 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.582 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.582 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.840 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.840 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.840 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.840 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.840 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.840 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.840 { 00:12:26.840 "cntlid": 97, 00:12:26.840 "qid": 0, 00:12:26.840 "state": "enabled", 00:12:26.840 "thread": "nvmf_tgt_poll_group_000", 00:12:26.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:26.840 "listen_address": { 00:12:26.840 "trtype": "TCP", 00:12:26.840 "adrfam": "IPv4", 00:12:26.840 "traddr": "10.0.0.3", 00:12:26.840 "trsvcid": "4420" 00:12:26.840 }, 00:12:26.840 "peer_address": { 00:12:26.840 "trtype": "TCP", 00:12:26.840 "adrfam": "IPv4", 00:12:26.840 "traddr": "10.0.0.1", 00:12:26.840 "trsvcid": "39596" 00:12:26.840 }, 00:12:26.840 "auth": { 00:12:26.840 "state": "completed", 00:12:26.840 "digest": "sha512", 00:12:26.840 "dhgroup": "null" 00:12:26.840 } 00:12:26.840 } 00:12:26.840 ]' 00:12:26.841 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.841 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.841 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.841 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:26.841 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.841 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.841 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.841 07:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.409 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:27.409 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:27.975 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.975 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:27.975 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.975 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.975 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:27.975 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.976 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:27.976 07:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.234 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.492 00:12:28.492 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.492 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.492 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.749 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.749 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.749 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.749 07:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.749 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.749 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.749 { 00:12:28.749 "cntlid": 99, 00:12:28.749 "qid": 0, 00:12:28.749 "state": "enabled", 00:12:28.749 "thread": "nvmf_tgt_poll_group_000", 00:12:28.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:28.749 "listen_address": { 00:12:28.749 "trtype": "TCP", 00:12:28.749 "adrfam": "IPv4", 00:12:28.749 "traddr": "10.0.0.3", 00:12:28.749 "trsvcid": "4420" 00:12:28.749 }, 00:12:28.749 "peer_address": { 00:12:28.749 "trtype": "TCP", 00:12:28.749 "adrfam": "IPv4", 00:12:28.749 "traddr": "10.0.0.1", 00:12:28.749 "trsvcid": "39614" 00:12:28.749 }, 00:12:28.749 "auth": { 00:12:28.749 "state": "completed", 00:12:28.749 "digest": "sha512", 00:12:28.749 "dhgroup": "null" 00:12:28.749 } 00:12:28.749 } 00:12:28.749 ]' 00:12:28.749 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.008 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.008 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.008 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:29.008 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.008 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.008 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.008 07:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.267 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:29.267 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:29.834 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.834 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:29.834 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.834 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.834 07:39:47 
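The nvme_connect step above repeats the same authentication from the Linux kernel initiator, passing the secrets directly on the nvme-cli command line. A sketch of that call, with $host_secret and $ctrl_secret as illustrative placeholders for the DHHC-1:01:.../DHHC-1:02:... strings printed a few lines up:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf \
      --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Supplying --dhchap-ctrl-secret in addition to --dhchap-secret makes the authentication bidirectional (the host also verifies the controller), which is why the key3 iterations in this log, which have no matching controller key, pass only --dhchap-secret.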
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.834 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.835 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:29.835 07:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.400 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.658 00:12:30.658 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.658 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.658 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.917 { 00:12:30.917 "cntlid": 101, 00:12:30.917 "qid": 0, 00:12:30.917 "state": "enabled", 00:12:30.917 "thread": "nvmf_tgt_poll_group_000", 00:12:30.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:30.917 "listen_address": { 00:12:30.917 "trtype": "TCP", 00:12:30.917 "adrfam": "IPv4", 00:12:30.917 "traddr": "10.0.0.3", 00:12:30.917 "trsvcid": "4420" 00:12:30.917 }, 00:12:30.917 "peer_address": { 00:12:30.917 "trtype": "TCP", 00:12:30.917 "adrfam": "IPv4", 00:12:30.917 "traddr": "10.0.0.1", 00:12:30.917 "trsvcid": "39640" 00:12:30.917 }, 00:12:30.917 "auth": { 00:12:30.917 "state": "completed", 00:12:30.917 "digest": "sha512", 00:12:30.917 "dhgroup": "null" 00:12:30.917 } 00:12:30.917 } 00:12:30.917 ]' 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.917 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.176 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:31.177 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.177 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.177 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.177 07:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.435 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:31.435 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:32.003 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.003 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:32.003 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.003 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:32.003 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.003 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.003 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:32.003 07:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:32.260 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:32.260 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.260 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:32.260 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:32.260 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:32.261 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.261 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:12:32.261 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.261 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.261 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.261 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.261 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.261 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.519 00:12:32.519 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.519 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.519 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.784 { 00:12:32.784 "cntlid": 103, 00:12:32.784 "qid": 0, 00:12:32.784 "state": "enabled", 00:12:32.784 "thread": "nvmf_tgt_poll_group_000", 00:12:32.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:32.784 "listen_address": { 00:12:32.784 "trtype": "TCP", 00:12:32.784 "adrfam": "IPv4", 00:12:32.784 "traddr": "10.0.0.3", 00:12:32.784 "trsvcid": "4420" 00:12:32.784 }, 00:12:32.784 "peer_address": { 00:12:32.784 "trtype": "TCP", 00:12:32.784 "adrfam": "IPv4", 00:12:32.784 "traddr": "10.0.0.1", 00:12:32.784 "trsvcid": "39662" 00:12:32.784 }, 00:12:32.784 "auth": { 00:12:32.784 "state": "completed", 00:12:32.784 "digest": "sha512", 00:12:32.784 "dhgroup": "null" 00:12:32.784 } 00:12:32.784 } 00:12:32.784 ]' 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:32.784 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.043 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.043 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.043 07:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.302 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:33.302 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:33.870 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.129 07:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.388 00:12:34.388 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.388 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.388 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.647 
07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.647 { 00:12:34.647 "cntlid": 105, 00:12:34.647 "qid": 0, 00:12:34.647 "state": "enabled", 00:12:34.647 "thread": "nvmf_tgt_poll_group_000", 00:12:34.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:34.647 "listen_address": { 00:12:34.647 "trtype": "TCP", 00:12:34.647 "adrfam": "IPv4", 00:12:34.647 "traddr": "10.0.0.3", 00:12:34.647 "trsvcid": "4420" 00:12:34.647 }, 00:12:34.647 "peer_address": { 00:12:34.647 "trtype": "TCP", 00:12:34.647 "adrfam": "IPv4", 00:12:34.647 "traddr": "10.0.0.1", 00:12:34.647 "trsvcid": "39678" 00:12:34.647 }, 00:12:34.647 "auth": { 00:12:34.647 "state": "completed", 00:12:34.647 "digest": "sha512", 00:12:34.647 "dhgroup": "ffdhe2048" 00:12:34.647 } 00:12:34.647 } 00:12:34.647 ]' 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:34.647 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.648 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.648 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.648 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.907 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:34.907 07:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:35.844 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.844 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:35.844 07:39:53 
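The sha512/ffdhe2048 pass running here is produced by the three nested loops the xtrace keeps naming in target/auth.sh (for digest, for dhgroup, for keyid). A rough sketch of that driver structure follows; the array contents are inferred only from the combinations visible in this portion of the log, so treat the exact lists as illustrative rather than the script's literal definition:

  for digest in sha384 sha512; do                      # other digests are exercised elsewhere in the run
    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe8192; do
      for keyid in "${!keys[@]}"; do                   # keys 0..3, some paired with ckey0..ckey2
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done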
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.844 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.844 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.844 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.844 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:35.844 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.845 07:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.413 00:12:36.413 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.413 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.413 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.672 { 00:12:36.672 "cntlid": 107, 00:12:36.672 "qid": 0, 00:12:36.672 "state": "enabled", 00:12:36.672 "thread": "nvmf_tgt_poll_group_000", 00:12:36.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:36.672 "listen_address": { 00:12:36.672 "trtype": "TCP", 00:12:36.672 "adrfam": "IPv4", 00:12:36.672 "traddr": "10.0.0.3", 00:12:36.672 "trsvcid": "4420" 00:12:36.672 }, 00:12:36.672 "peer_address": { 00:12:36.672 "trtype": "TCP", 00:12:36.672 "adrfam": "IPv4", 00:12:36.672 "traddr": "10.0.0.1", 00:12:36.672 "trsvcid": "39704" 00:12:36.672 }, 00:12:36.672 "auth": { 00:12:36.672 "state": "completed", 00:12:36.672 "digest": "sha512", 00:12:36.672 "dhgroup": "ffdhe2048" 00:12:36.672 } 00:12:36.672 } 00:12:36.672 ]' 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.672 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.932 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:36.932 07:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.869 07:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.436 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.436 { 00:12:38.436 "cntlid": 109, 00:12:38.436 "qid": 0, 00:12:38.436 "state": "enabled", 00:12:38.436 "thread": "nvmf_tgt_poll_group_000", 00:12:38.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:38.436 "listen_address": { 00:12:38.436 "trtype": "TCP", 00:12:38.436 "adrfam": "IPv4", 00:12:38.436 "traddr": "10.0.0.3", 00:12:38.436 "trsvcid": "4420" 00:12:38.436 }, 00:12:38.436 "peer_address": { 00:12:38.436 "trtype": "TCP", 00:12:38.436 "adrfam": "IPv4", 00:12:38.436 "traddr": "10.0.0.1", 00:12:38.436 "trsvcid": "45064" 00:12:38.436 }, 00:12:38.436 "auth": { 00:12:38.436 "state": "completed", 00:12:38.436 "digest": "sha512", 00:12:38.436 "dhgroup": "ffdhe2048" 00:12:38.436 } 00:12:38.436 } 00:12:38.436 ]' 00:12:38.436 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.695 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.695 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.695 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.695 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.695 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.695 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.695 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.954 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:38.954 07:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:39.520 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
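The disconnect that has just completed is the tail of one iteration: the RPC-attached controller was already detached with bdev_nvme_detach_controller, the kernel initiator has now disconnected, and the only remaining step before the next key is removing the host from the subsystem. A sketch of that end-of-iteration teardown, using the wrapper names from this trace (hostrpc talks to /var/tmp/host.sock, rpc_cmd to the nvmf target):

  hostrpc bdev_nvme_detach_controller nvme0       # drop the SPDK host-side controller once the qpair checks pass
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # drop the kernel initiator session
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf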
00:12:39.520 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:39.520 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.520 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.520 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.520 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.520 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:39.520 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.779 07:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.038 00:12:40.296 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.296 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.296 07:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.563 { 00:12:40.563 "cntlid": 111, 00:12:40.563 "qid": 0, 00:12:40.563 "state": "enabled", 00:12:40.563 "thread": "nvmf_tgt_poll_group_000", 00:12:40.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:40.563 "listen_address": { 00:12:40.563 "trtype": "TCP", 00:12:40.563 "adrfam": "IPv4", 00:12:40.563 "traddr": "10.0.0.3", 00:12:40.563 "trsvcid": "4420" 00:12:40.563 }, 00:12:40.563 "peer_address": { 00:12:40.563 "trtype": "TCP", 00:12:40.563 "adrfam": "IPv4", 00:12:40.563 "traddr": "10.0.0.1", 00:12:40.563 "trsvcid": "45098" 00:12:40.563 }, 00:12:40.563 "auth": { 00:12:40.563 "state": "completed", 00:12:40.563 "digest": "sha512", 00:12:40.563 "dhgroup": "ffdhe2048" 00:12:40.563 } 00:12:40.563 } 00:12:40.563 ]' 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.563 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.834 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:40.834 07:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:41.403 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.662 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:41.662 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.662 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.662 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.662 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.662 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.662 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:41.662 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.920 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.179 00:12:42.179 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.179 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:12:42.179 07:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.438 { 00:12:42.438 "cntlid": 113, 00:12:42.438 "qid": 0, 00:12:42.438 "state": "enabled", 00:12:42.438 "thread": "nvmf_tgt_poll_group_000", 00:12:42.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:42.438 "listen_address": { 00:12:42.438 "trtype": "TCP", 00:12:42.438 "adrfam": "IPv4", 00:12:42.438 "traddr": "10.0.0.3", 00:12:42.438 "trsvcid": "4420" 00:12:42.438 }, 00:12:42.438 "peer_address": { 00:12:42.438 "trtype": "TCP", 00:12:42.438 "adrfam": "IPv4", 00:12:42.438 "traddr": "10.0.0.1", 00:12:42.438 "trsvcid": "45120" 00:12:42.438 }, 00:12:42.438 "auth": { 00:12:42.438 "state": "completed", 00:12:42.438 "digest": "sha512", 00:12:42.438 "dhgroup": "ffdhe3072" 00:12:42.438 } 00:12:42.438 } 00:12:42.438 ]' 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.438 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.696 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:42.696 07:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret 
DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:43.284 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.284 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:43.284 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.284 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.284 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.284 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.284 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.284 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.548 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.806 00:12:43.806 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.806 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.806 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.064 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.064 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.064 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.064 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.064 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.064 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.064 { 00:12:44.064 "cntlid": 115, 00:12:44.064 "qid": 0, 00:12:44.064 "state": "enabled", 00:12:44.064 "thread": "nvmf_tgt_poll_group_000", 00:12:44.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:44.064 "listen_address": { 00:12:44.064 "trtype": "TCP", 00:12:44.064 "adrfam": "IPv4", 00:12:44.064 "traddr": "10.0.0.3", 00:12:44.064 "trsvcid": "4420" 00:12:44.064 }, 00:12:44.064 "peer_address": { 00:12:44.064 "trtype": "TCP", 00:12:44.064 "adrfam": "IPv4", 00:12:44.064 "traddr": "10.0.0.1", 00:12:44.064 "trsvcid": "45146" 00:12:44.064 }, 00:12:44.064 "auth": { 00:12:44.064 "state": "completed", 00:12:44.064 "digest": "sha512", 00:12:44.064 "dhgroup": "ffdhe3072" 00:12:44.064 } 00:12:44.064 } 00:12:44.064 ]' 00:12:44.064 07:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.064 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.064 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.323 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:44.323 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.323 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.323 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.324 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.324 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:44.324 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid 
b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:44.891 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.891 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:44.891 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.891 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.891 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.891 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.891 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:44.891 07:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.459 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.718 00:12:45.718 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.718 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.718 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.977 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.977 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.978 { 00:12:45.978 "cntlid": 117, 00:12:45.978 "qid": 0, 00:12:45.978 "state": "enabled", 00:12:45.978 "thread": "nvmf_tgt_poll_group_000", 00:12:45.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:45.978 "listen_address": { 00:12:45.978 "trtype": "TCP", 00:12:45.978 "adrfam": "IPv4", 00:12:45.978 "traddr": "10.0.0.3", 00:12:45.978 "trsvcid": "4420" 00:12:45.978 }, 00:12:45.978 "peer_address": { 00:12:45.978 "trtype": "TCP", 00:12:45.978 "adrfam": "IPv4", 00:12:45.978 "traddr": "10.0.0.1", 00:12:45.978 "trsvcid": "45180" 00:12:45.978 }, 00:12:45.978 "auth": { 00:12:45.978 "state": "completed", 00:12:45.978 "digest": "sha512", 00:12:45.978 "dhgroup": "ffdhe3072" 00:12:45.978 } 00:12:45.978 } 00:12:45.978 ]' 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.978 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.236 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.236 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.236 07:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.236 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:46.236 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:47.172 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.172 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:47.172 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.172 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.172 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.172 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.172 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.172 07:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.172 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.173 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.173 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:47.173 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.173 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.432 00:12:47.432 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.432 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.432 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.691 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.691 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.691 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.691 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.950 { 00:12:47.950 "cntlid": 119, 00:12:47.950 "qid": 0, 00:12:47.950 "state": "enabled", 00:12:47.950 "thread": "nvmf_tgt_poll_group_000", 00:12:47.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:47.950 "listen_address": { 00:12:47.950 "trtype": "TCP", 00:12:47.950 "adrfam": "IPv4", 00:12:47.950 "traddr": "10.0.0.3", 00:12:47.950 "trsvcid": "4420" 00:12:47.950 }, 00:12:47.950 "peer_address": { 00:12:47.950 "trtype": "TCP", 00:12:47.950 "adrfam": "IPv4", 00:12:47.950 "traddr": "10.0.0.1", 00:12:47.950 "trsvcid": "44616" 00:12:47.950 }, 00:12:47.950 "auth": { 00:12:47.950 "state": "completed", 00:12:47.950 "digest": "sha512", 00:12:47.950 "dhgroup": "ffdhe3072" 00:12:47.950 } 00:12:47.950 } 00:12:47.950 ]' 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.950 07:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.208 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:48.209 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:48.796 07:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.055 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.314 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.314 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.314 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.314 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.572 00:12:49.572 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.572 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.573 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.831 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.831 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.831 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.831 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.831 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.831 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.831 { 00:12:49.831 "cntlid": 121, 00:12:49.831 "qid": 0, 00:12:49.831 "state": "enabled", 00:12:49.831 "thread": "nvmf_tgt_poll_group_000", 00:12:49.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:49.831 "listen_address": { 00:12:49.831 "trtype": "TCP", 00:12:49.831 "adrfam": "IPv4", 00:12:49.831 "traddr": "10.0.0.3", 00:12:49.831 "trsvcid": "4420" 00:12:49.831 }, 00:12:49.831 "peer_address": { 00:12:49.831 "trtype": "TCP", 00:12:49.831 "adrfam": "IPv4", 00:12:49.831 "traddr": "10.0.0.1", 00:12:49.831 "trsvcid": "44644" 00:12:49.831 }, 00:12:49.831 "auth": { 00:12:49.831 "state": "completed", 00:12:49.831 "digest": "sha512", 00:12:49.831 "dhgroup": "ffdhe4096" 00:12:49.831 } 00:12:49.831 } 00:12:49.831 ]' 00:12:49.831 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.090 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.090 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.090 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:50.090 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.090 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.090 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.090 07:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.348 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret 
DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:50.348 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:50.916 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.916 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:50.916 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.916 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.916 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.916 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.916 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:50.916 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.175 07:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.434 00:12:51.434 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.434 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.434 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.693 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.693 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.693 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.693 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.693 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.693 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.693 { 00:12:51.693 "cntlid": 123, 00:12:51.693 "qid": 0, 00:12:51.693 "state": "enabled", 00:12:51.693 "thread": "nvmf_tgt_poll_group_000", 00:12:51.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:51.693 "listen_address": { 00:12:51.693 "trtype": "TCP", 00:12:51.693 "adrfam": "IPv4", 00:12:51.693 "traddr": "10.0.0.3", 00:12:51.693 "trsvcid": "4420" 00:12:51.693 }, 00:12:51.693 "peer_address": { 00:12:51.693 "trtype": "TCP", 00:12:51.693 "adrfam": "IPv4", 00:12:51.693 "traddr": "10.0.0.1", 00:12:51.693 "trsvcid": "44676" 00:12:51.693 }, 00:12:51.693 "auth": { 00:12:51.693 "state": "completed", 00:12:51.693 "digest": "sha512", 00:12:51.693 "dhgroup": "ffdhe4096" 00:12:51.693 } 00:12:51.693 } 00:12:51.693 ]' 00:12:51.693 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.951 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.951 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.951 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:51.951 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.951 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.951 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.951 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.210 07:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:52.210 07:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:12:52.777 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.777 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:52.777 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.777 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.777 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.777 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.777 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:52.777 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.038 07:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.038 07:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.297 00:12:53.297 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.297 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.297 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.556 { 00:12:53.556 "cntlid": 125, 00:12:53.556 "qid": 0, 00:12:53.556 "state": "enabled", 00:12:53.556 "thread": "nvmf_tgt_poll_group_000", 00:12:53.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:53.556 "listen_address": { 00:12:53.556 "trtype": "TCP", 00:12:53.556 "adrfam": "IPv4", 00:12:53.556 "traddr": "10.0.0.3", 00:12:53.556 "trsvcid": "4420" 00:12:53.556 }, 00:12:53.556 "peer_address": { 00:12:53.556 "trtype": "TCP", 00:12:53.556 "adrfam": "IPv4", 00:12:53.556 "traddr": "10.0.0.1", 00:12:53.556 "trsvcid": "44714" 00:12:53.556 }, 00:12:53.556 "auth": { 00:12:53.556 "state": "completed", 00:12:53.556 "digest": "sha512", 00:12:53.556 "dhgroup": "ffdhe4096" 00:12:53.556 } 00:12:53.556 } 00:12:53.556 ]' 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.556 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.814 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:53.814 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.814 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.814 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.814 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.072 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:54.072 07:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:12:54.639 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.639 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:54.639 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.639 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.639 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.639 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.639 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:54.639 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.897 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:54.898 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.898 07:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.156 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.415 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.415 { 00:12:55.415 "cntlid": 127, 00:12:55.415 "qid": 0, 00:12:55.415 "state": "enabled", 00:12:55.415 "thread": "nvmf_tgt_poll_group_000", 00:12:55.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:55.415 "listen_address": { 00:12:55.415 "trtype": "TCP", 00:12:55.415 "adrfam": "IPv4", 00:12:55.415 "traddr": "10.0.0.3", 00:12:55.415 "trsvcid": "4420" 00:12:55.415 }, 00:12:55.415 "peer_address": { 00:12:55.415 "trtype": "TCP", 00:12:55.415 "adrfam": "IPv4", 00:12:55.415 "traddr": "10.0.0.1", 00:12:55.415 "trsvcid": "44738" 00:12:55.415 }, 00:12:55.415 "auth": { 00:12:55.415 "state": "completed", 00:12:55.415 "digest": "sha512", 00:12:55.416 "dhgroup": "ffdhe4096" 00:12:55.416 } 00:12:55.416 } 00:12:55.416 ]' 00:12:55.416 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.674 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.674 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.674 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:55.674 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.674 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.674 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.674 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.932 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:55.932 07:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:56.507 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.767 07:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.767 07:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.334 00:12:57.334 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.334 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.334 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.592 { 00:12:57.592 "cntlid": 129, 00:12:57.592 "qid": 0, 00:12:57.592 "state": "enabled", 00:12:57.592 "thread": "nvmf_tgt_poll_group_000", 00:12:57.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:12:57.592 "listen_address": { 00:12:57.592 "trtype": "TCP", 00:12:57.592 "adrfam": "IPv4", 00:12:57.592 "traddr": "10.0.0.3", 00:12:57.592 "trsvcid": "4420" 00:12:57.592 }, 00:12:57.592 "peer_address": { 00:12:57.592 "trtype": "TCP", 00:12:57.592 "adrfam": "IPv4", 00:12:57.592 "traddr": "10.0.0.1", 00:12:57.592 "trsvcid": "59942" 00:12:57.592 }, 00:12:57.592 "auth": { 00:12:57.592 "state": "completed", 00:12:57.592 "digest": "sha512", 00:12:57.592 "dhgroup": "ffdhe6144" 00:12:57.592 } 00:12:57.592 } 00:12:57.592 ]' 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.592 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.159 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:58.159 07:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:12:58.725 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.725 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:12:58.725 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.725 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.725 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.725 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.725 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:58.725 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:58.984 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:58.984 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.984 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.984 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:58.984 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.984 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.984 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.984 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.984 07:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.242 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.242 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.242 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.242 07:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.501 00:12:59.760 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.760 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.760 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.019 { 00:13:00.019 "cntlid": 131, 00:13:00.019 "qid": 0, 00:13:00.019 "state": "enabled", 00:13:00.019 "thread": "nvmf_tgt_poll_group_000", 00:13:00.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:00.019 "listen_address": { 00:13:00.019 "trtype": "TCP", 00:13:00.019 "adrfam": "IPv4", 00:13:00.019 "traddr": "10.0.0.3", 00:13:00.019 "trsvcid": "4420" 00:13:00.019 }, 00:13:00.019 "peer_address": { 00:13:00.019 "trtype": "TCP", 00:13:00.019 "adrfam": "IPv4", 00:13:00.019 "traddr": "10.0.0.1", 00:13:00.019 "trsvcid": "59986" 00:13:00.019 }, 00:13:00.019 "auth": { 00:13:00.019 "state": "completed", 00:13:00.019 "digest": "sha512", 00:13:00.019 "dhgroup": "ffdhe6144" 00:13:00.019 } 00:13:00.019 } 00:13:00.019 ]' 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.019 07:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.586 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:13:00.586 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:13:01.153 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.153 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:01.153 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.153 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.153 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.153 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.153 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:01.153 07:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.412 07:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.412 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.978 00:13:01.978 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.978 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.978 07:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.237 { 00:13:02.237 "cntlid": 133, 00:13:02.237 "qid": 0, 00:13:02.237 "state": "enabled", 00:13:02.237 "thread": "nvmf_tgt_poll_group_000", 00:13:02.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:02.237 "listen_address": { 00:13:02.237 "trtype": "TCP", 00:13:02.237 "adrfam": "IPv4", 00:13:02.237 "traddr": "10.0.0.3", 00:13:02.237 "trsvcid": "4420" 00:13:02.237 }, 00:13:02.237 "peer_address": { 00:13:02.237 "trtype": "TCP", 00:13:02.237 "adrfam": "IPv4", 00:13:02.237 "traddr": "10.0.0.1", 00:13:02.237 "trsvcid": "60014" 00:13:02.237 }, 00:13:02.237 "auth": { 00:13:02.237 "state": "completed", 00:13:02.237 "digest": "sha512", 00:13:02.237 "dhgroup": "ffdhe6144" 00:13:02.237 } 00:13:02.237 } 00:13:02.237 ]' 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.237 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.496 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:02.496 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.496 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.496 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.496 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.754 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:13:02.754 07:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:13:03.323 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.323 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:03.323 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.323 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.323 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.323 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.323 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.323 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.581 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.169 00:13:04.169 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.169 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.169 07:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.169 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.169 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.169 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.169 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.169 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.169 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.169 { 00:13:04.169 "cntlid": 135, 00:13:04.169 "qid": 0, 00:13:04.169 "state": "enabled", 00:13:04.169 "thread": "nvmf_tgt_poll_group_000", 00:13:04.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:04.169 "listen_address": { 00:13:04.169 "trtype": "TCP", 00:13:04.169 "adrfam": "IPv4", 00:13:04.169 "traddr": "10.0.0.3", 00:13:04.169 "trsvcid": "4420" 00:13:04.169 }, 00:13:04.170 "peer_address": { 00:13:04.170 "trtype": "TCP", 00:13:04.170 "adrfam": "IPv4", 00:13:04.170 "traddr": "10.0.0.1", 00:13:04.170 "trsvcid": "60046" 00:13:04.170 }, 00:13:04.170 "auth": { 00:13:04.170 "state": "completed", 00:13:04.170 "digest": "sha512", 00:13:04.170 "dhgroup": "ffdhe6144" 00:13:04.170 } 00:13:04.170 } 00:13:04.170 ]' 00:13:04.170 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.170 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.170 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.429 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:04.429 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.429 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.429 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.429 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.688 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:13:04.688 07:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:13:05.255 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.255 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:05.255 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.255 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.256 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.256 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:05.256 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.256 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:05.256 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.822 07:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.388 00:13:06.388 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.388 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.388 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.648 { 00:13:06.648 "cntlid": 137, 00:13:06.648 "qid": 0, 00:13:06.648 "state": "enabled", 00:13:06.648 "thread": "nvmf_tgt_poll_group_000", 00:13:06.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:06.648 "listen_address": { 00:13:06.648 "trtype": "TCP", 00:13:06.648 "adrfam": "IPv4", 00:13:06.648 "traddr": "10.0.0.3", 00:13:06.648 "trsvcid": "4420" 00:13:06.648 }, 00:13:06.648 "peer_address": { 00:13:06.648 "trtype": "TCP", 00:13:06.648 "adrfam": "IPv4", 00:13:06.648 "traddr": "10.0.0.1", 00:13:06.648 "trsvcid": "60078" 00:13:06.648 }, 00:13:06.648 "auth": { 00:13:06.648 "state": "completed", 00:13:06.648 "digest": "sha512", 00:13:06.648 "dhgroup": "ffdhe8192" 00:13:06.648 } 00:13:06.648 } 00:13:06.648 ]' 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.648 07:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.648 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.215 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:13:07.215 07:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:13:07.783 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.783 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:07.783 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.783 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.783 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.783 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.783 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:07.783 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:08.042 07:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.042 07:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.608 00:13:08.608 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.608 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.608 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.176 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.176 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.176 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.176 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.176 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.176 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.176 { 00:13:09.176 "cntlid": 139, 00:13:09.176 "qid": 0, 00:13:09.176 "state": "enabled", 00:13:09.176 "thread": "nvmf_tgt_poll_group_000", 00:13:09.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:09.176 "listen_address": { 00:13:09.176 "trtype": "TCP", 00:13:09.176 "adrfam": "IPv4", 00:13:09.176 "traddr": "10.0.0.3", 00:13:09.176 "trsvcid": "4420" 00:13:09.176 }, 00:13:09.176 "peer_address": { 00:13:09.176 "trtype": "TCP", 00:13:09.176 "adrfam": "IPv4", 00:13:09.176 "traddr": "10.0.0.1", 00:13:09.176 "trsvcid": "37056" 00:13:09.176 }, 00:13:09.176 "auth": { 00:13:09.176 "state": "completed", 00:13:09.176 "digest": "sha512", 00:13:09.176 "dhgroup": "ffdhe8192" 00:13:09.176 } 00:13:09.176 } 00:13:09.176 ]' 00:13:09.176 07:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.176 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.176 07:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.176 07:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:09.176 07:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.176 07:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.176 07:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.176 07:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.434 07:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:13:09.434 07:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: --dhchap-ctrl-secret DHHC-1:02:ZjNhYTZkNTZlMTgzOGJkNGNkOGU5MzNlMDA0OTUwODhmMzljNDlkNGRlNjU1ZDBk3QcUag==: 00:13:10.371 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.371 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:10.371 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.371 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.371 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.371 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.371 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:10.371 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.630 07:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:11.199 00:13:11.199 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.199 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.199 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.459 { 00:13:11.459 "cntlid": 141, 00:13:11.459 "qid": 0, 00:13:11.459 "state": "enabled", 00:13:11.459 "thread": "nvmf_tgt_poll_group_000", 00:13:11.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:11.459 "listen_address": { 00:13:11.459 "trtype": "TCP", 00:13:11.459 "adrfam": "IPv4", 00:13:11.459 "traddr": "10.0.0.3", 00:13:11.459 "trsvcid": "4420" 00:13:11.459 }, 00:13:11.459 "peer_address": { 00:13:11.459 "trtype": "TCP", 00:13:11.459 "adrfam": "IPv4", 00:13:11.459 "traddr": "10.0.0.1", 00:13:11.459 "trsvcid": "37078" 00:13:11.459 }, 00:13:11.459 "auth": { 00:13:11.459 "state": "completed", 00:13:11.459 "digest": 
"sha512", 00:13:11.459 "dhgroup": "ffdhe8192" 00:13:11.459 } 00:13:11.459 } 00:13:11.459 ]' 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.459 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.034 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:13:12.034 07:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:01:ZjhjYTQzOWJjMmJkNTBlZDllNWRjOTc2MWExMWM0NTN2H2xa: 00:13:12.606 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.606 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:12.606 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.606 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.606 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.606 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.606 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:12.606 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:12.864 07:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:13.430 00:13:13.430 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.430 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.430 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.999 { 00:13:13.999 "cntlid": 143, 00:13:13.999 "qid": 0, 00:13:13.999 "state": "enabled", 00:13:13.999 "thread": "nvmf_tgt_poll_group_000", 00:13:13.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:13.999 "listen_address": { 00:13:13.999 "trtype": "TCP", 00:13:13.999 "adrfam": "IPv4", 00:13:13.999 "traddr": "10.0.0.3", 00:13:13.999 "trsvcid": "4420" 00:13:13.999 }, 00:13:13.999 "peer_address": { 00:13:13.999 "trtype": "TCP", 00:13:13.999 "adrfam": "IPv4", 00:13:13.999 "traddr": "10.0.0.1", 00:13:13.999 "trsvcid": "37104" 00:13:13.999 }, 00:13:13.999 "auth": { 00:13:13.999 "state": "completed", 00:13:13.999 
"digest": "sha512", 00:13:13.999 "dhgroup": "ffdhe8192" 00:13:13.999 } 00:13:13.999 } 00:13:13.999 ]' 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.999 07:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.257 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:13:14.257 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:15.192 07:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.192 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:15.758 00:13:15.758 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.758 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.758 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.017 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.017 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.017 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.017 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.017 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.017 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.017 { 00:13:16.017 "cntlid": 145, 00:13:16.017 "qid": 0, 00:13:16.017 "state": "enabled", 00:13:16.017 "thread": "nvmf_tgt_poll_group_000", 00:13:16.017 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:16.017 "listen_address": { 00:13:16.017 "trtype": "TCP", 00:13:16.017 "adrfam": "IPv4", 00:13:16.017 "traddr": "10.0.0.3", 00:13:16.017 "trsvcid": "4420" 00:13:16.017 }, 00:13:16.017 "peer_address": { 00:13:16.017 "trtype": "TCP", 00:13:16.017 "adrfam": "IPv4", 00:13:16.017 "traddr": "10.0.0.1", 00:13:16.017 "trsvcid": "37118" 00:13:16.017 }, 00:13:16.017 "auth": { 00:13:16.017 "state": "completed", 00:13:16.017 "digest": "sha512", 00:13:16.017 "dhgroup": "ffdhe8192" 00:13:16.017 } 00:13:16.017 } 00:13:16.017 ]' 00:13:16.017 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.276 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.276 07:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.276 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:16.276 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.276 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.276 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.276 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.535 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:13:16.535 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:00:ZTgyZTVhZDM4ZmNmNjdmOTUxYjQ2NDc5OGVjMDc3NDQ1ZjA4OGI3MDVmZjlkYWNiZsU88Q==: --dhchap-ctrl-secret DHHC-1:03:Yzg0ZWZmOWNmYzkxODk3YTAxMjJiYjM3NTA4Yzc5ODg0Nzk1OTkwMWEwY2NiMGVkYTg4NWI1ZWVmYTNkOTdkOSb5niM=: 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 00:13:17.102 07:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:17.102 07:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:17.669 request: 00:13:17.669 { 00:13:17.669 "name": "nvme0", 00:13:17.669 "trtype": "tcp", 00:13:17.669 "traddr": "10.0.0.3", 00:13:17.669 "adrfam": "ipv4", 00:13:17.669 "trsvcid": "4420", 00:13:17.669 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:17.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:17.669 "prchk_reftag": false, 00:13:17.669 "prchk_guard": false, 00:13:17.669 "hdgst": false, 00:13:17.669 "ddgst": false, 00:13:17.669 "dhchap_key": "key2", 00:13:17.669 "allow_unrecognized_csi": false, 00:13:17.669 "method": "bdev_nvme_attach_controller", 00:13:17.669 "req_id": 1 00:13:17.669 } 00:13:17.669 Got JSON-RPC error response 00:13:17.669 response: 00:13:17.669 { 00:13:17.669 "code": -5, 00:13:17.669 "message": "Input/output error" 00:13:17.669 } 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:17.669 
07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:17.669 07:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:18.237 request: 00:13:18.237 { 00:13:18.237 "name": "nvme0", 00:13:18.237 "trtype": "tcp", 00:13:18.237 "traddr": "10.0.0.3", 00:13:18.237 "adrfam": "ipv4", 00:13:18.237 "trsvcid": "4420", 00:13:18.237 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:18.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:18.237 "prchk_reftag": false, 00:13:18.237 "prchk_guard": false, 00:13:18.237 "hdgst": false, 00:13:18.237 "ddgst": false, 00:13:18.237 "dhchap_key": "key1", 00:13:18.237 "dhchap_ctrlr_key": "ckey2", 00:13:18.237 "allow_unrecognized_csi": false, 00:13:18.237 "method": "bdev_nvme_attach_controller", 00:13:18.237 "req_id": 1 00:13:18.237 } 00:13:18.237 Got JSON-RPC error response 00:13:18.237 response: 00:13:18.237 { 
00:13:18.237 "code": -5, 00:13:18.237 "message": "Input/output error" 00:13:18.237 } 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.237 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.804 
request: 00:13:18.804 { 00:13:18.804 "name": "nvme0", 00:13:18.804 "trtype": "tcp", 00:13:18.804 "traddr": "10.0.0.3", 00:13:18.804 "adrfam": "ipv4", 00:13:18.804 "trsvcid": "4420", 00:13:18.804 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:18.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:18.804 "prchk_reftag": false, 00:13:18.804 "prchk_guard": false, 00:13:18.804 "hdgst": false, 00:13:18.804 "ddgst": false, 00:13:18.804 "dhchap_key": "key1", 00:13:18.804 "dhchap_ctrlr_key": "ckey1", 00:13:18.804 "allow_unrecognized_csi": false, 00:13:18.804 "method": "bdev_nvme_attach_controller", 00:13:18.804 "req_id": 1 00:13:18.804 } 00:13:18.804 Got JSON-RPC error response 00:13:18.804 response: 00:13:18.804 { 00:13:18.804 "code": -5, 00:13:18.804 "message": "Input/output error" 00:13:18.804 } 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67025 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67025 ']' 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67025 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67025 00:13:18.804 killing process with pid 67025 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67025' 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67025 00:13:18.804 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67025 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:19.063 07:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70039 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70039 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70039 ']' 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.063 07:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70039 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70039 ']' 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.321 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.579 null0 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5I3 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.579 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.DJV ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DJV 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5UK 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.TDD ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.TDD 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:19.839 07:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Kwk 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.DIj ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DIj 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.f9H 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:13:19.839 07:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:20.775 nvme0n1 00:13:20.775 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.775 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.775 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.034 { 00:13:21.034 "cntlid": 1, 00:13:21.034 "qid": 0, 00:13:21.034 "state": "enabled", 00:13:21.034 "thread": "nvmf_tgt_poll_group_000", 00:13:21.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:21.034 "listen_address": { 00:13:21.034 "trtype": "TCP", 00:13:21.034 "adrfam": "IPv4", 00:13:21.034 "traddr": "10.0.0.3", 00:13:21.034 "trsvcid": "4420" 00:13:21.034 }, 00:13:21.034 "peer_address": { 00:13:21.034 "trtype": "TCP", 00:13:21.034 "adrfam": "IPv4", 00:13:21.034 "traddr": "10.0.0.1", 00:13:21.034 "trsvcid": "38458" 00:13:21.034 }, 00:13:21.034 "auth": { 00:13:21.034 "state": "completed", 00:13:21.034 "digest": "sha512", 00:13:21.034 "dhgroup": "ffdhe8192" 00:13:21.034 } 00:13:21.034 } 00:13:21.034 ]' 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.034 07:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.602 07:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:13:21.602 07:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:13:22.172 07:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key3 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:22.172 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:22.430 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:22.430 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:22.430 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:22.430 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:22.431 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.431 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:22.431 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.431 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.431 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.431 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.998 request: 00:13:22.998 { 00:13:22.998 "name": "nvme0", 00:13:22.998 "trtype": "tcp", 00:13:22.998 "traddr": "10.0.0.3", 00:13:22.998 "adrfam": "ipv4", 00:13:22.998 "trsvcid": "4420", 00:13:22.998 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:22.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:22.998 "prchk_reftag": false, 00:13:22.998 "prchk_guard": false, 00:13:22.998 "hdgst": false, 00:13:22.998 "ddgst": false, 00:13:22.998 "dhchap_key": "key3", 00:13:22.998 "allow_unrecognized_csi": false, 00:13:22.998 "method": "bdev_nvme_attach_controller", 00:13:22.998 "req_id": 1 00:13:22.998 } 00:13:22.998 Got JSON-RPC error response 00:13:22.998 response: 00:13:22.998 { 00:13:22.998 "code": -5, 00:13:22.998 "message": "Input/output error" 00:13:22.998 } 00:13:22.998 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:22.998 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.998 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.998 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.998 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:22.998 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.999 07:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.257 request: 00:13:23.257 { 00:13:23.257 "name": "nvme0", 00:13:23.257 "trtype": "tcp", 00:13:23.257 "traddr": "10.0.0.3", 00:13:23.257 "adrfam": "ipv4", 00:13:23.257 "trsvcid": "4420", 00:13:23.257 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:23.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:23.257 "prchk_reftag": false, 00:13:23.257 "prchk_guard": false, 00:13:23.257 "hdgst": false, 00:13:23.257 "ddgst": false, 00:13:23.257 "dhchap_key": "key3", 00:13:23.257 "allow_unrecognized_csi": false, 00:13:23.257 "method": "bdev_nvme_attach_controller", 00:13:23.257 "req_id": 1 00:13:23.257 } 00:13:23.257 Got JSON-RPC error response 00:13:23.257 response: 00:13:23.257 { 00:13:23.257 "code": -5, 00:13:23.257 "message": "Input/output error" 00:13:23.257 } 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:23.257 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:23.258 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:23.517 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:24.084 request: 00:13:24.084 { 00:13:24.084 "name": "nvme0", 00:13:24.084 "trtype": "tcp", 00:13:24.084 "traddr": "10.0.0.3", 00:13:24.084 "adrfam": "ipv4", 00:13:24.084 "trsvcid": "4420", 00:13:24.084 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:24.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:24.084 "prchk_reftag": false, 00:13:24.084 "prchk_guard": false, 00:13:24.084 "hdgst": false, 00:13:24.084 "ddgst": false, 00:13:24.084 "dhchap_key": "key0", 00:13:24.084 "dhchap_ctrlr_key": "key1", 00:13:24.084 "allow_unrecognized_csi": false, 00:13:24.084 "method": "bdev_nvme_attach_controller", 00:13:24.084 "req_id": 1 00:13:24.084 } 00:13:24.084 Got JSON-RPC error response 00:13:24.084 response: 00:13:24.084 { 00:13:24.084 "code": -5, 00:13:24.084 "message": "Input/output error" 00:13:24.084 } 00:13:24.084 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:24.084 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:24.084 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:24.084 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:24.084 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:24.084 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:24.084 07:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:24.084 nvme0n1 00:13:24.343 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:24.343 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:24.343 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.602 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.602 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.602 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.602 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 00:13:24.602 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.602 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.860 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.860 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:24.860 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:24.860 07:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:25.796 nvme0n1 00:13:25.796 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:25.796 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.796 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:26.055 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.055 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:26.055 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.056 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.056 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.056 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:26.056 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.056 07:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:26.314 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.314 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:13:26.314 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid b4f53fcb-853f-493d-bd98-9a37948dacaf -l 0 --dhchap-secret DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: --dhchap-ctrl-secret DHHC-1:03:NjkwZjg3Yjk1Yzk2YTYwZTAzNmNmMWZmYTk3ZGM0ZDMyZGE1NTkzNzE4ZDg1NGNlNDhhM2VlZjgyZjRiMWJlNeJH/I4=: 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.885 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:27.148 07:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:27.715 request: 00:13:27.715 { 00:13:27.715 "name": "nvme0", 00:13:27.715 "trtype": "tcp", 00:13:27.715 "traddr": "10.0.0.3", 00:13:27.715 "adrfam": "ipv4", 00:13:27.715 "trsvcid": "4420", 00:13:27.715 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:27.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf", 00:13:27.715 "prchk_reftag": false, 00:13:27.715 "prchk_guard": false, 00:13:27.715 "hdgst": false, 00:13:27.715 "ddgst": false, 00:13:27.715 "dhchap_key": "key1", 00:13:27.715 "allow_unrecognized_csi": false, 00:13:27.715 "method": "bdev_nvme_attach_controller", 00:13:27.715 "req_id": 1 00:13:27.715 } 00:13:27.715 Got JSON-RPC error response 00:13:27.715 response: 00:13:27.715 { 00:13:27.715 "code": -5, 00:13:27.715 "message": "Input/output error" 00:13:27.715 } 00:13:27.715 07:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:27.715 07:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.715 07:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.715 07:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.715 07:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:27.715 07:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:27.715 07:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:28.650 nvme0n1 00:13:28.650 
07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:28.650 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:28.650 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.650 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.650 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.650 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.908 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:28.908 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.908 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.908 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.908 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:29.167 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:29.167 07:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:29.426 nvme0n1 00:13:29.426 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:29.426 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:29.426 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.684 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.684 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.684 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.252 07:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: '' 2s 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: ]] 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjU0MGFiNWIzOGVjNmE3YzFhNmNmNTYxMTU5MzExNTGEJSjV: 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:30.252 07:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:32.154 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:32.154 07:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: 2s 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:32.154 07:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: ]] 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDcwYzlmYWVhZWFmYTk1MDk5MWZjMzNkNmRiYjg1ZDhkYWQ0MTc4MTIyNDcyYjBjnyaP5Q==: 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:32.154 07:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:34.688 07:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:35.257 nvme0n1 00:13:35.257 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:35.257 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.257 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.257 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.257 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:35.257 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:35.824 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:35.824 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.824 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:36.083 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.083 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:36.083 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.083 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.083 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.083 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:36.083 07:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:36.343 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:36.343 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:36.343 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:36.670 07:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:36.670 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:36.928 request: 00:13:36.928 { 00:13:36.928 "name": "nvme0", 00:13:36.928 "dhchap_key": "key1", 00:13:36.928 "dhchap_ctrlr_key": "key3", 00:13:36.928 "method": "bdev_nvme_set_keys", 00:13:36.928 "req_id": 1 00:13:36.928 } 00:13:36.928 Got JSON-RPC error response 00:13:36.928 response: 00:13:36.928 { 00:13:36.928 "code": -13, 00:13:36.928 "message": "Permission denied" 00:13:36.928 } 00:13:37.187 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:37.187 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:37.187 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:37.187 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:37.187 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:37.187 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.187 07:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:37.446 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:37.446 07:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:38.382 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:38.382 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.382 07:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:38.641 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:38.641 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:38.641 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.641 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.641 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.641 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:38.641 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:38.641 07:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:39.578 nvme0n1 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:13:39.578 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:40.146 request: 00:13:40.146 { 00:13:40.146 "name": "nvme0", 00:13:40.146 "dhchap_key": "key2", 00:13:40.146 "dhchap_ctrlr_key": "key0", 00:13:40.146 "method": "bdev_nvme_set_keys", 00:13:40.146 "req_id": 1 00:13:40.146 } 00:13:40.146 Got JSON-RPC error response 00:13:40.146 response: 00:13:40.146 { 00:13:40.146 "code": -13, 00:13:40.146 "message": "Permission denied" 00:13:40.146 } 00:13:40.146 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:40.146 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.146 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.146 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.146 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:40.146 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:40.146 07:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.404 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:40.405 07:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:41.340 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:41.340 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:41.340 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67044 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67044 ']' 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67044 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67044 00:13:41.599 killing process with pid 67044 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:41.599 07:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67044' 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67044 00:13:41.599 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67044 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.171 rmmod nvme_tcp 00:13:42.171 rmmod nvme_fabrics 00:13:42.171 rmmod nvme_keyring 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70039 ']' 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70039 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70039 ']' 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70039 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:42.171 07:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70039 00:13:42.171 killing process with pid 70039 00:13:42.171 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:42.171 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:42.171 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70039' 00:13:42.171 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70039 00:13:42.171 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70039 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.430 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5I3 /tmp/spdk.key-sha256.5UK /tmp/spdk.key-sha384.Kwk /tmp/spdk.key-sha512.f9H /tmp/spdk.key-sha512.DJV /tmp/spdk.key-sha384.TDD /tmp/spdk.key-sha256.DIj '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:42.689 00:13:42.689 real 3m3.759s 00:13:42.689 user 7m8.032s 00:13:42.689 sys 0m38.756s 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.689 ************************************ 00:13:42.689 END TEST nvmf_auth_target 
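For reference, the nvmf_auth_target run that finishes above exercises the DH-HMAC-CHAP re-key path from both the target and the host side. A minimal sketch of that flow is given below, using only RPCs that appear in the trace; the host NQN, key names and socket paths are placeholders standing in for the values generated for this run, not a definitive recipe.

# Sketch of the re-key flow exercised by target/auth.sh (placeholder values).
SUBNQN=nqn.2024-03.io.spdk:cnode0                      # subsystem NQN used in this run
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:<host-uuid>    # generated fresh per run
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (default RPC socket): restrict which DH-HMAC-CHAP keys this host may use.
$RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: attach with matching keys; attaching with a key outside the allowed
# set (e.g. key1 alone) fails with -5 Input/output error, as seen above.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: rotate keys on the live controller; keys the target no longer allows
# are rejected with -13 Permission denied, as seen above.
$RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# Tear down the host-side controller between steps.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0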
00:13:42.689 ************************************ 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.689 ************************************ 00:13:42.689 START TEST nvmf_bdevio_no_huge 00:13:42.689 ************************************ 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:42.689 * Looking for test storage... 00:13:42.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:42.689 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:42.690 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:13:42.690 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.949 --rc genhtml_branch_coverage=1 00:13:42.949 --rc genhtml_function_coverage=1 00:13:42.949 --rc genhtml_legend=1 00:13:42.949 --rc geninfo_all_blocks=1 00:13:42.949 --rc geninfo_unexecuted_blocks=1 00:13:42.949 00:13:42.949 ' 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.949 --rc genhtml_branch_coverage=1 00:13:42.949 --rc genhtml_function_coverage=1 00:13:42.949 --rc genhtml_legend=1 00:13:42.949 --rc geninfo_all_blocks=1 00:13:42.949 --rc geninfo_unexecuted_blocks=1 00:13:42.949 00:13:42.949 ' 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.949 --rc genhtml_branch_coverage=1 00:13:42.949 --rc genhtml_function_coverage=1 00:13:42.949 --rc genhtml_legend=1 00:13:42.949 --rc geninfo_all_blocks=1 00:13:42.949 --rc geninfo_unexecuted_blocks=1 00:13:42.949 00:13:42.949 ' 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.949 --rc genhtml_branch_coverage=1 00:13:42.949 --rc genhtml_function_coverage=1 00:13:42.949 --rc genhtml_legend=1 00:13:42.949 --rc geninfo_all_blocks=1 00:13:42.949 --rc geninfo_unexecuted_blocks=1 00:13:42.949 00:13:42.949 ' 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.949 
07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.949 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.950 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.950 
07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:42.950 Cannot find device "nvmf_init_br" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:42.950 Cannot find device "nvmf_init_br2" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:42.950 Cannot find device "nvmf_tgt_br" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.950 Cannot find device "nvmf_tgt_br2" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:42.950 Cannot find device "nvmf_init_br" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:42.950 Cannot find device "nvmf_init_br2" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:42.950 Cannot find device "nvmf_tgt_br" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:42.950 Cannot find device "nvmf_tgt_br2" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:42.950 Cannot find device "nvmf_br" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:42.950 Cannot find device "nvmf_init_if" 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:42.950 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:42.950 Cannot find device "nvmf_init_if2" 00:13:43.209 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:43.209 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:43.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.209 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:43.209 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.209 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:43.209 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.209 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.209 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:43.210 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.210 07:41:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:43.210 07:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:43.210 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:43.469 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:43.469 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:13:43.469 00:13:43.469 --- 10.0.0.3 ping statistics --- 00:13:43.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.469 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:43.469 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:43.469 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:13:43.469 00:13:43.469 --- 10.0.0.4 ping statistics --- 00:13:43.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.469 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:43.469 00:13:43.469 --- 10.0.0.1 ping statistics --- 00:13:43.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.469 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:43.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:43.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:13:43.469 00:13:43.469 --- 10.0.0.2 ping statistics --- 00:13:43.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.469 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70663 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70663 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 70663 ']' 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:43.469 07:41:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:43.469 [2024-11-08 07:41:01.336645] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
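The trace above rebuilt the harness test network from scratch: a network namespace nvmf_tgt_ns_spdk holding the target ends of two veth pairs (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3/10.0.0.4), two initiator-side interfaces (nvmf_init_if/nvmf_init_if2 at 10.0.0.1/10.0.0.2), all bridge-side peers enslaved to nvmf_br, iptables ACCEPT rules for TCP port 4420, and one verification ping per address. A minimal standalone sketch of the same topology, reduced to a single veth pair and using only commands visible in the trace (run as root; this is not the harness script itself):

```bash
#!/usr/bin/env bash
# Condensed re-creation of the nvmf veth/bridge topology (names and IPs from the log above).
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # both stub ends join the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                             # initiator reaching the target address
```

The second veth pair and the matching teardown (visible near the end of this test) follow the same pattern.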
00:13:43.469 [2024-11-08 07:41:01.336896] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:43.728 [2024-11-08 07:41:01.512117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.728 [2024-11-08 07:41:01.605348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.729 [2024-11-08 07:41:01.605905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.729 [2024-11-08 07:41:01.606455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.729 [2024-11-08 07:41:01.606995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.729 [2024-11-08 07:41:01.607281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.729 [2024-11-08 07:41:01.608212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:43.729 [2024-11-08 07:41:01.608328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:43.729 [2024-11-08 07:41:01.608493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:43.729 [2024-11-08 07:41:01.608502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.729 [2024-11-08 07:41:01.615350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 [2024-11-08 07:41:02.407692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 Malloc0 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.667 07:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.667 [2024-11-08 07:41:02.447862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:44.667 { 00:13:44.667 "params": { 00:13:44.667 "name": "Nvme$subsystem", 00:13:44.667 "trtype": "$TEST_TRANSPORT", 00:13:44.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:44.667 "adrfam": "ipv4", 00:13:44.667 "trsvcid": "$NVMF_PORT", 00:13:44.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:44.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:44.667 "hdgst": ${hdgst:-false}, 00:13:44.667 "ddgst": ${ddgst:-false} 00:13:44.667 }, 00:13:44.667 "method": "bdev_nvme_attach_controller" 00:13:44.667 } 00:13:44.667 EOF 00:13:44.667 )") 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
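At this point the target running inside the namespace has a TCP transport, a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.3:4420; the bdevio binary is then launched with a JSON config produced by gen_nvmf_target_json (printed just below), which attaches to that listener as an initiator. A condensed sketch of the same target-side RPC sequence, assuming a reachable default RPC socket (the harness wraps each call in rpc_cmd):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # same transport options as the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
```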
00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:44.667 07:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:44.667 "params": { 00:13:44.667 "name": "Nvme1", 00:13:44.667 "trtype": "tcp", 00:13:44.667 "traddr": "10.0.0.3", 00:13:44.667 "adrfam": "ipv4", 00:13:44.667 "trsvcid": "4420", 00:13:44.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:44.667 "hdgst": false, 00:13:44.667 "ddgst": false 00:13:44.667 }, 00:13:44.667 "method": "bdev_nvme_attach_controller" 00:13:44.667 }' 00:13:44.667 [2024-11-08 07:41:02.497500] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:13:44.667 [2024-11-08 07:41:02.497570] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70699 ] 00:13:44.926 [2024-11-08 07:41:02.648962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.926 [2024-11-08 07:41:02.744726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.926 [2024-11-08 07:41:02.744896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.926 [2024-11-08 07:41:02.744901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.926 [2024-11-08 07:41:02.760485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.185 I/O targets: 00:13:45.185 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:45.185 00:13:45.185 00:13:45.185 CUnit - A unit testing framework for C - Version 2.1-3 00:13:45.185 http://cunit.sourceforge.net/ 00:13:45.185 00:13:45.185 00:13:45.185 Suite: bdevio tests on: Nvme1n1 00:13:45.185 Test: blockdev write read block ...passed 00:13:45.185 Test: blockdev write zeroes read block ...passed 00:13:45.185 Test: blockdev write zeroes read no split ...passed 00:13:45.185 Test: blockdev write zeroes read split ...passed 00:13:45.185 Test: blockdev write zeroes read split partial ...passed 00:13:45.185 Test: blockdev reset ...[2024-11-08 07:41:03.018783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:45.185 [2024-11-08 07:41:03.019193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ca310 (9): Bad file descriptor 00:13:45.185 [2024-11-08 07:41:03.039050] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:45.185 passed 00:13:45.185 Test: blockdev write read 8 blocks ...passed 00:13:45.185 Test: blockdev write read size > 128k ...passed 00:13:45.185 Test: blockdev write read invalid size ...passed 00:13:45.185 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:45.185 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:45.185 Test: blockdev write read max offset ...passed 00:13:45.185 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:45.185 Test: blockdev writev readv 8 blocks ...passed 00:13:45.185 Test: blockdev writev readv 30 x 1block ...passed 00:13:45.185 Test: blockdev writev readv block ...passed 00:13:45.185 Test: blockdev writev readv size > 128k ...passed 00:13:45.185 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:45.185 Test: blockdev comparev and writev ...[2024-11-08 07:41:03.048234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.185 [2024-11-08 07:41:03.048382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.048409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.185 [2024-11-08 07:41:03.048425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.048663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.185 [2024-11-08 07:41:03.048684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.048701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.185 [2024-11-08 07:41:03.048713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.048934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.185 [2024-11-08 07:41:03.048952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.048969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.185 [2024-11-08 07:41:03.048993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.049233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.185 [2024-11-08 07:41:03.049355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.049443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:45.185 passed 00:13:45.185 Test: blockdev nvme passthru rw ...passed 00:13:45.185 Test: blockdev nvme 
passthru vendor specific ...[2024-11-08 07:41:03.049604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.050207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:45.185 [2024-11-08 07:41:03.050416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.050613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:45.185 [2024-11-08 07:41:03.050794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.050968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:45.185 [2024-11-08 07:41:03.051135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:45.185 [2024-11-08 07:41:03.051335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:45.185 [2024-11-08 07:41:03.051470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:45.185 passed 00:13:45.185 Test: blockdev nvme admin passthru ...passed 00:13:45.185 Test: blockdev copy ...passed 00:13:45.185 00:13:45.185 Run Summary: Type Total Ran Passed Failed Inactive 00:13:45.185 suites 1 1 n/a 0 0 00:13:45.185 tests 23 23 23 0 0 00:13:45.185 asserts 152 152 152 0 n/a 00:13:45.185 00:13:45.185 Elapsed time = 0.175 seconds 00:13:45.444 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.444 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.444 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.702 rmmod nvme_tcp 00:13:45.702 rmmod nvme_fabrics 00:13:45.702 rmmod nvme_keyring 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70663 ']' 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70663 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 70663 ']' 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 70663 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70663 00:13:45.702 killing process with pid 70663 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70663' 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 70663 00:13:45.702 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 70663 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:46.270 07:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:46.270 07:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:46.270 00:13:46.270 real 0m3.695s 00:13:46.270 user 0m10.726s 00:13:46.270 sys 0m1.566s 00:13:46.270 ************************************ 00:13:46.270 END TEST nvmf_bdevio_no_huge 00:13:46.270 ************************************ 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:46.270 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.574 ************************************ 00:13:46.574 START TEST nvmf_tls 00:13:46.574 ************************************ 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:46.574 * Looking for test storage... 
00:13:46.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.574 --rc genhtml_branch_coverage=1 00:13:46.574 --rc genhtml_function_coverage=1 00:13:46.574 --rc genhtml_legend=1 00:13:46.574 --rc geninfo_all_blocks=1 00:13:46.574 --rc geninfo_unexecuted_blocks=1 00:13:46.574 00:13:46.574 ' 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.574 --rc genhtml_branch_coverage=1 00:13:46.574 --rc genhtml_function_coverage=1 00:13:46.574 --rc genhtml_legend=1 00:13:46.574 --rc geninfo_all_blocks=1 00:13:46.574 --rc geninfo_unexecuted_blocks=1 00:13:46.574 00:13:46.574 ' 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.574 --rc genhtml_branch_coverage=1 00:13:46.574 --rc genhtml_function_coverage=1 00:13:46.574 --rc genhtml_legend=1 00:13:46.574 --rc geninfo_all_blocks=1 00:13:46.574 --rc geninfo_unexecuted_blocks=1 00:13:46.574 00:13:46.574 ' 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.574 --rc genhtml_branch_coverage=1 00:13:46.574 --rc genhtml_function_coverage=1 00:13:46.574 --rc genhtml_legend=1 00:13:46.574 --rc geninfo_all_blocks=1 00:13:46.574 --rc geninfo_unexecuted_blocks=1 00:13:46.574 00:13:46.574 ' 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.574 07:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.574 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.833 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:46.833 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:13:46.833 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.834 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:46.834 
07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:46.834 Cannot find device "nvmf_init_br" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:46.834 Cannot find device "nvmf_init_br2" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:46.834 Cannot find device "nvmf_tgt_br" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.834 Cannot find device "nvmf_tgt_br2" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:46.834 Cannot find device "nvmf_init_br" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:46.834 Cannot find device "nvmf_init_br2" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:46.834 Cannot find device "nvmf_tgt_br" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:46.834 Cannot find device "nvmf_tgt_br2" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:46.834 Cannot find device "nvmf_br" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:46.834 Cannot find device "nvmf_init_if" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:46.834 Cannot find device "nvmf_init_if2" 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:46.834 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.835 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:46.835 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.835 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.835 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:46.835 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.835 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.835 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.835 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:47.093 07:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:47.093 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:47.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:13:47.093 00:13:47.093 --- 10.0.0.3 ping statistics --- 00:13:47.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.093 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:47.093 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:47.093 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:47.093 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:13:47.094 00:13:47.094 --- 10.0.0.4 ping statistics --- 00:13:47.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.094 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:47.094 00:13:47.094 --- 10.0.0.1 ping statistics --- 00:13:47.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.094 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:47.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:47.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:13:47.094 00:13:47.094 --- 10.0.0.2 ping statistics --- 00:13:47.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.094 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:47.094 07:41:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=70935 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 70935 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 70935 ']' 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:47.094 07:41:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.353 [2024-11-08 07:41:05.063836] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
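Note the nvmf_tgt command line above: for the TLS test the target is started with --wait-for-rpc, so SPDK pauses before finishing initialization and the socket layer can still be reconfigured. The sock/ssl RPCs traced below rely on that ordering; a hedged sketch of the essential sequence, using only RPCs that appear later in this log:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# The target was launched with --wait-for-rpc, so framework init has not run yet.
$rpc sock_set_default_impl -i ssl                    # select the TLS-capable socket implementation
$rpc sock_impl_set_options -i ssl --tls-version 13   # pin TLS 1.3 for the ssl implementation
$rpc framework_start_init                            # only now finish SPDK initialization
```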
00:13:47.353 [2024-11-08 07:41:05.064157] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.353 [2024-11-08 07:41:05.227029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.353 [2024-11-08 07:41:05.285268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.353 [2024-11-08 07:41:05.285330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.353 [2024-11-08 07:41:05.285346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.353 [2024-11-08 07:41:05.285359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.353 [2024-11-08 07:41:05.285370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.353 [2024-11-08 07:41:05.285727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.289 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:48.289 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:48.289 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.289 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:48.289 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.289 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.289 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:48.289 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:48.548 true 00:13:48.548 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:48.548 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:48.806 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:48.806 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:48.806 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:48.806 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:48.806 07:41:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:49.374 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:49.374 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:49.374 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:49.374 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:49.374 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:49.633 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:49.633 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:49.633 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:49.633 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:49.892 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:49.892 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:49.892 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:50.151 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:50.151 07:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:50.410 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:50.410 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:50.410 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:50.669 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:50.669 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.8roE32jyJM 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.9q8k1FzLBY 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.8roE32jyJM 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.9q8k1FzLBY 00:13:50.928 07:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:51.187 07:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:51.445 [2024-11-08 07:41:09.312194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.445 07:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.8roE32jyJM 00:13:51.445 07:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8roE32jyJM 00:13:51.445 07:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:51.704 [2024-11-08 07:41:09.541670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.704 07:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:51.962 07:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:51.962 [2024-11-08 07:41:09.917722] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:51.962 [2024-11-08 07:41:09.917938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:52.220 07:41:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:52.479 malloc0 00:13:52.479 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:52.479 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8roE32jyJM 00:13:52.737 07:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:52.996 07:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8roE32jyJM 00:14:05.202 Initializing NVMe Controllers 00:14:05.202 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.202 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:05.202 Initialization complete. Launching workers. 00:14:05.202 ======================================================== 00:14:05.202 Latency(us) 00:14:05.202 Device Information : IOPS MiB/s Average min max 00:14:05.202 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13760.73 53.75 4651.42 1534.94 5971.23 00:14:05.202 ======================================================== 00:14:05.203 Total : 13760.73 53.75 4651.42 1534.94 5971.23 00:14:05.203 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8roE32jyJM 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8roE32jyJM 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71169 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71169 /var/tmp/bdevperf.sock 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71169 ']' 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
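Aside on the key material used throughout this run: the two NVMeTLSkey-1 tokens produced by format_interchange_psk earlier in this log follow the NVMe TLS PSK interchange layout of a fixed prefix, a two-digit hash indicator (01 for these keys), a base64 blob, and a trailing colon. Below is a minimal sketch of that construction, assuming the blob is the configured key bytes with a CRC32 appended little-endian; the helper name make_psk_sketch and that byte order are assumptions made for illustration, not taken from this log.

make_psk_sketch() {
  local key=$1 hash=$2   # e.g. 00112233445566778899aabbccddeeff and 1
  python3 - "$key" "$hash" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
# Assumed: a 4-byte little-endian CRC32 of the key is appended before base64 encoding.
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

The resulting token is what gets written to the 0600-permission temp files (/tmp/tmp.8roE32jyJM and /tmp/tmp.9q8k1FzLBY in this run) that the keyring RPCs reference.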
00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:05.203 07:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.203 [2024-11-08 07:41:21.136639] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:05.203 [2024-11-08 07:41:21.137176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:14:05.203 [2024-11-08 07:41:21.298414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.203 [2024-11-08 07:41:21.361960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.203 [2024-11-08 07:41:21.410053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.203 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:05.203 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:05.203 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8roE32jyJM 00:14:05.203 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:05.203 [2024-11-08 07:41:22.439689] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:05.203 TLSTESTn1 00:14:05.203 07:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:05.203 Running I/O for 10 seconds... 
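For readability, the initiator-side sequence the bdevperf case above just ran reduces to three RPCs against the bdevperf socket: register the PSK file in the keyring, attach a TLS-protected controller with that key, then drive I/O through the bdevperf RPC helper. A condensed sketch using the same commands seen in this log (script paths shortened; the /tmp file name is this run's temporary key and would differ elsewhere):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8roE32jyJM
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
  -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The target side had already registered the same key file and allowed nqn.2016-06.io.spdk:host1 with --psk key0, which is why this attach succeeds while the mismatch cases further down fail.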
00:14:06.707 5774.00 IOPS, 22.55 MiB/s [2024-11-08T07:41:26.047Z] 5780.50 IOPS, 22.58 MiB/s [2024-11-08T07:41:26.983Z] 5784.33 IOPS, 22.60 MiB/s [2024-11-08T07:41:27.919Z] 5791.75 IOPS, 22.62 MiB/s [2024-11-08T07:41:28.855Z] 5806.80 IOPS, 22.68 MiB/s [2024-11-08T07:41:29.790Z] 5795.33 IOPS, 22.64 MiB/s [2024-11-08T07:41:30.723Z] 5782.86 IOPS, 22.59 MiB/s [2024-11-08T07:41:32.107Z] 5783.50 IOPS, 22.59 MiB/s [2024-11-08T07:41:32.691Z] 5785.11 IOPS, 22.60 MiB/s [2024-11-08T07:41:32.691Z] 5784.80 IOPS, 22.60 MiB/s 00:14:14.730 Latency(us) 00:14:14.730 [2024-11-08T07:41:32.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.730 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:14.730 Verification LBA range: start 0x0 length 0x2000 00:14:14.730 TLSTESTn1 : 10.01 5790.32 22.62 0.00 0.00 22071.89 4306.65 16477.62 00:14:14.730 [2024-11-08T07:41:32.691Z] =================================================================================================================== 00:14:14.730 [2024-11-08T07:41:32.691Z] Total : 5790.32 22.62 0.00 0.00 22071.89 4306.65 16477.62 00:14:14.730 { 00:14:14.730 "results": [ 00:14:14.730 { 00:14:14.730 "job": "TLSTESTn1", 00:14:14.730 "core_mask": "0x4", 00:14:14.730 "workload": "verify", 00:14:14.730 "status": "finished", 00:14:14.730 "verify_range": { 00:14:14.730 "start": 0, 00:14:14.730 "length": 8192 00:14:14.730 }, 00:14:14.730 "queue_depth": 128, 00:14:14.730 "io_size": 4096, 00:14:14.730 "runtime": 10.012056, 00:14:14.730 "iops": 5790.31919118311, 00:14:14.730 "mibps": 22.618434340559023, 00:14:14.730 "io_failed": 0, 00:14:14.730 "io_timeout": 0, 00:14:14.730 "avg_latency_us": 22071.893660316422, 00:14:14.730 "min_latency_us": 4306.651428571428, 00:14:14.730 "max_latency_us": 16477.62285714286 00:14:14.730 } 00:14:14.730 ], 00:14:14.730 "core_count": 1 00:14:14.730 } 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71169 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71169 ']' 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71169 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71169 00:14:14.989 killing process with pid 71169 00:14:14.989 Received shutdown signal, test time was about 10.000000 seconds 00:14:14.989 00:14:14.989 Latency(us) 00:14:14.989 [2024-11-08T07:41:32.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.989 [2024-11-08T07:41:32.950Z] =================================================================================================================== 00:14:14.989 [2024-11-08T07:41:32.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 71169' 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71169 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71169 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9q8k1FzLBY 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9q8k1FzLBY 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9q8k1FzLBY 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:14.989 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9q8k1FzLBY 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71310 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71310 /var/tmp/bdevperf.sock 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71310 ']' 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.990 07:41:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.249 [2024-11-08 07:41:32.962281] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:14:15.249 [2024-11-08 07:41:32.963293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71310 ] 00:14:15.249 [2024-11-08 07:41:33.113497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.249 [2024-11-08 07:41:33.161194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.249 [2024-11-08 07:41:33.203039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:16.183 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:16.183 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:16.183 07:41:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9q8k1FzLBY 00:14:16.442 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:16.442 [2024-11-08 07:41:34.376338] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:16.442 [2024-11-08 07:41:34.386763] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:16.442 [2024-11-08 07:41:34.386782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2327fb0 (107): Transport endpoint is not connected 00:14:16.442 [2024-11-08 07:41:34.387771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2327fb0 (9): Bad file descriptor 00:14:16.442 [2024-11-08 07:41:34.388770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:16.442 [2024-11-08 07:41:34.388791] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:16.442 [2024-11-08 07:41:34.388800] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:16.442 [2024-11-08 07:41:34.388813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:16.442 request: 00:14:16.442 { 00:14:16.442 "name": "TLSTEST", 00:14:16.442 "trtype": "tcp", 00:14:16.442 "traddr": "10.0.0.3", 00:14:16.442 "adrfam": "ipv4", 00:14:16.442 "trsvcid": "4420", 00:14:16.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:16.442 "prchk_reftag": false, 00:14:16.442 "prchk_guard": false, 00:14:16.442 "hdgst": false, 00:14:16.442 "ddgst": false, 00:14:16.442 "psk": "key0", 00:14:16.442 "allow_unrecognized_csi": false, 00:14:16.442 "method": "bdev_nvme_attach_controller", 00:14:16.442 "req_id": 1 00:14:16.442 } 00:14:16.442 Got JSON-RPC error response 00:14:16.442 response: 00:14:16.442 { 00:14:16.442 "code": -5, 00:14:16.442 "message": "Input/output error" 00:14:16.442 } 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71310 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71310 ']' 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71310 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71310 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:16.701 killing process with pid 71310 00:14:16.701 Received shutdown signal, test time was about 10.000000 seconds 00:14:16.701 00:14:16.701 Latency(us) 00:14:16.701 [2024-11-08T07:41:34.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.701 [2024-11-08T07:41:34.662Z] =================================================================================================================== 00:14:16.701 [2024-11-08T07:41:34.662Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71310' 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71310 00:14:16.701 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71310 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8roE32jyJM 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8roE32jyJM 
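The -5 Input/output error above is the expected outcome of this negative case: on the bdevperf side key0 was pointed at /tmp/tmp.9q8k1FzLBY, while the target only holds the PSK from /tmp/tmp.8roE32jyJM for host1, so the TLS handshake presumably fails on mismatched key material, the TCP connection drops (errno 107), and bdev_nvme_attach_controller reports an I/O error that the harness's NOT wrapper counts as a pass. Outside the harness, the same expect-failure check could be written roughly as below, reusing the exact attach command from this log:

if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
  -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  -q nqn.2016-06.io.spdk:host1 --psk key0; then
  echo "attach with a mismatched PSK unexpectedly succeeded" >&2
  exit 1
fi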
00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8roE32jyJM 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8roE32jyJM 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71335 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71335 /var/tmp/bdevperf.sock 00:14:16.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71335 ']' 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:16.702 07:41:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.961 [2024-11-08 07:41:34.667020] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:14:16.961 [2024-11-08 07:41:34.667112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:14:16.961 [2024-11-08 07:41:34.814132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.961 [2024-11-08 07:41:34.864277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.961 [2024-11-08 07:41:34.905810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.896 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:17.896 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:17.896 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8roE32jyJM 00:14:18.155 07:41:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:18.414 [2024-11-08 07:41:36.129971] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.414 [2024-11-08 07:41:36.136833] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:18.414 [2024-11-08 07:41:36.137005] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:18.414 [2024-11-08 07:41:36.137056] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:18.414 [2024-11-08 07:41:36.137173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00fb0 (107): Transport endpoint is not connected 00:14:18.414 [2024-11-08 07:41:36.138163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e00fb0 (9): Bad file descriptor 00:14:18.414 [2024-11-08 07:41:36.139161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:18.414 [2024-11-08 07:41:36.139176] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:18.414 [2024-11-08 07:41:36.139186] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:18.414 [2024-11-08 07:41:36.139200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:18.414 request: 00:14:18.414 { 00:14:18.414 "name": "TLSTEST", 00:14:18.414 "trtype": "tcp", 00:14:18.414 "traddr": "10.0.0.3", 00:14:18.414 "adrfam": "ipv4", 00:14:18.414 "trsvcid": "4420", 00:14:18.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.414 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:18.414 "prchk_reftag": false, 00:14:18.414 "prchk_guard": false, 00:14:18.414 "hdgst": false, 00:14:18.414 "ddgst": false, 00:14:18.414 "psk": "key0", 00:14:18.414 "allow_unrecognized_csi": false, 00:14:18.414 "method": "bdev_nvme_attach_controller", 00:14:18.414 "req_id": 1 00:14:18.414 } 00:14:18.414 Got JSON-RPC error response 00:14:18.414 response: 00:14:18.414 { 00:14:18.414 "code": -5, 00:14:18.414 "message": "Input/output error" 00:14:18.414 } 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71335 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71335 ']' 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71335 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71335 00:14:18.414 killing process with pid 71335 00:14:18.414 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.414 00:14:18.414 Latency(us) 00:14:18.414 [2024-11-08T07:41:36.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.414 [2024-11-08T07:41:36.375Z] =================================================================================================================== 00:14:18.414 [2024-11-08T07:41:36.375Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71335' 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71335 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71335 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8roE32jyJM 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8roE32jyJM 
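In the case above the key material matched but the host NQN did not: the target logs 'Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' because the PSK lookup is keyed on the host NQN and subsystem NQN pair, and only host1 was registered with nvmf_subsystem_add_host. The client-side symptom is the same -5 Input/output error. If host2 were actually meant to connect, the target-side registration would look like the line below (hypothetical here; the test deliberately leaves host2 unregistered):

rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0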
00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8roE32jyJM 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:18.414 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8roE32jyJM 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71369 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71369 /var/tmp/bdevperf.sock 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71369 ']' 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:18.415 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.673 [2024-11-08 07:41:36.406787] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:14:18.673 [2024-11-08 07:41:36.406857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71369 ] 00:14:18.673 [2024-11-08 07:41:36.544550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.673 [2024-11-08 07:41:36.589653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.673 [2024-11-08 07:41:36.631401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.932 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:18.932 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:18.932 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8roE32jyJM 00:14:18.932 07:41:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:19.191 [2024-11-08 07:41:37.055334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.191 [2024-11-08 07:41:37.066931] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:19.191 [2024-11-08 07:41:37.067114] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:19.191 [2024-11-08 07:41:37.067239] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:19.191 [2024-11-08 07:41:37.067713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12fb0 (107): Transport endpoint is not connected 00:14:19.191 [2024-11-08 07:41:37.068706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c12fb0 (9): Bad file descriptor 00:14:19.191 [2024-11-08 07:41:37.069703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:19.191 [2024-11-08 07:41:37.069812] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:19.191 [2024-11-08 07:41:37.069878] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:19.191 [2024-11-08 07:41:37.069935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
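This case is the mirror image: the identity 'NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2' cannot resolve because subsystem cnode2 was never created on the target, so the handshake aborts down the same errno 107 path and produces the JSON-RPC error that follows. Purely for illustration (the test intentionally leaves cnode2 undefined), making that identity resolvable would take a target-side setup mirroring the earlier cnode1 steps, for example:

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 -k
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0

where the SPDK00000000000002 serial number is an assumed placeholder.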
00:14:19.191 request: 00:14:19.191 { 00:14:19.191 "name": "TLSTEST", 00:14:19.191 "trtype": "tcp", 00:14:19.191 "traddr": "10.0.0.3", 00:14:19.191 "adrfam": "ipv4", 00:14:19.191 "trsvcid": "4420", 00:14:19.191 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:19.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.191 "prchk_reftag": false, 00:14:19.191 "prchk_guard": false, 00:14:19.191 "hdgst": false, 00:14:19.191 "ddgst": false, 00:14:19.191 "psk": "key0", 00:14:19.191 "allow_unrecognized_csi": false, 00:14:19.191 "method": "bdev_nvme_attach_controller", 00:14:19.191 "req_id": 1 00:14:19.191 } 00:14:19.191 Got JSON-RPC error response 00:14:19.191 response: 00:14:19.191 { 00:14:19.191 "code": -5, 00:14:19.191 "message": "Input/output error" 00:14:19.191 } 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71369 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71369 ']' 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71369 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71369 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:19.191 killing process with pid 71369 00:14:19.191 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.191 00:14:19.191 Latency(us) 00:14:19.191 [2024-11-08T07:41:37.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.191 [2024-11-08T07:41:37.152Z] =================================================================================================================== 00:14:19.191 [2024-11-08T07:41:37.152Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71369' 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71369 00:14:19.191 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71369 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:19.450 07:41:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71390 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71390 /var/tmp/bdevperf.sock 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71390 ']' 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:19.450 07:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.450 [2024-11-08 07:41:37.346466] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:14:19.450 [2024-11-08 07:41:37.346781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71390 ] 00:14:19.709 [2024-11-08 07:41:37.490783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.709 [2024-11-08 07:41:37.537220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.709 [2024-11-08 07:41:37.578787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.648 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:20.648 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:20.648 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:20.648 [2024-11-08 07:41:38.482388] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:20.648 [2024-11-08 07:41:38.482430] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:20.648 request: 00:14:20.648 { 00:14:20.648 "name": "key0", 00:14:20.648 "path": "", 00:14:20.648 "method": "keyring_file_add_key", 00:14:20.648 "req_id": 1 00:14:20.648 } 00:14:20.648 Got JSON-RPC error response 00:14:20.648 response: 00:14:20.648 { 00:14:20.648 "code": -1, 00:14:20.648 "message": "Operation not permitted" 00:14:20.648 } 00:14:20.648 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:20.908 [2024-11-08 07:41:38.666534] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.908 [2024-11-08 07:41:38.666586] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:20.908 request: 00:14:20.908 { 00:14:20.908 "name": "TLSTEST", 00:14:20.908 "trtype": "tcp", 00:14:20.908 "traddr": "10.0.0.3", 00:14:20.908 "adrfam": "ipv4", 00:14:20.908 "trsvcid": "4420", 00:14:20.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.908 "prchk_reftag": false, 00:14:20.908 "prchk_guard": false, 00:14:20.908 "hdgst": false, 00:14:20.908 "ddgst": false, 00:14:20.908 "psk": "key0", 00:14:20.908 "allow_unrecognized_csi": false, 00:14:20.908 "method": "bdev_nvme_attach_controller", 00:14:20.908 "req_id": 1 00:14:20.908 } 00:14:20.908 Got JSON-RPC error response 00:14:20.908 response: 00:14:20.908 { 00:14:20.908 "code": -126, 00:14:20.908 "message": "Required key not available" 00:14:20.908 } 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71390 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71390 ']' 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71390 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:20.908 07:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71390 00:14:20.908 killing process with pid 71390 00:14:20.908 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.908 00:14:20.908 Latency(us) 00:14:20.908 [2024-11-08T07:41:38.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.908 [2024-11-08T07:41:38.869Z] =================================================================================================================== 00:14:20.908 [2024-11-08T07:41:38.869Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71390' 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71390 00:14:20.908 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71390 00:14:21.167 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 70935 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 70935 ']' 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 70935 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70935 00:14:21.168 killing process with pid 70935 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70935' 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 70935 00:14:21.168 07:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 70935 00:14:21.168 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:21.168 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:21.168 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:21.168 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:21.168 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:21.168 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:21.168 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:21.168 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Qkew0YUEwh 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Qkew0YUEwh 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71429 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71429 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71429 ']' 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:21.427 07:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.427 [2024-11-08 07:41:39.202243] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:21.427 [2024-11-08 07:41:39.203299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.427 [2024-11-08 07:41:39.355815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.687 [2024-11-08 07:41:39.405247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.687 [2024-11-08 07:41:39.405461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:21.687 [2024-11-08 07:41:39.405552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.687 [2024-11-08 07:41:39.405600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.687 [2024-11-08 07:41:39.405626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.687 [2024-11-08 07:41:39.405927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.687 [2024-11-08 07:41:39.447742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Qkew0YUEwh 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qkew0YUEwh 00:14:22.253 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:22.512 [2024-11-08 07:41:40.456733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.772 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:23.031 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:23.031 [2024-11-08 07:41:40.936803] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:23.031 [2024-11-08 07:41:40.937246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:23.031 07:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:23.290 malloc0 00:14:23.290 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:23.548 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qkew0YUEwh 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
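For context on the key material above: the key_long string produced at target/tls.sh@160 (NVMeTLSkey-1:02:...) is an NVMe/TCP TLS pre-shared key in the interchange format, i.e. a fixed prefix, a two-digit hash selector, and a base64 blob of the configured key followed by its CRC32. The Python sketch below mirrors what the inlined "python -" step appears to compute; the little-endian CRC byte order and the heredoc details are assumptions, not taken from this log.

import base64
import zlib

def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Wrap a configured PSK in the NVMe/TCP TLS PSK interchange format.

    digest is the two-digit hash selector ("02" in the log above).
    Assumption: the CRC32 of the key bytes is appended little-endian before
    base64 encoding, matching the behaviour suggested by the logged output.
    """
    key_bytes = key.encode("ascii")
    crc = zlib.crc32(key_bytes).to_bytes(4, "little")
    b64 = base64.b64encode(key_bytes + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"

if __name__ == "__main__":
    # Same key material and digest selector as target/tls.sh@160 above; if the
    # byte-order assumption holds, this prints the logged key_long value.
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))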
00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qkew0YUEwh 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71484 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71484 /var/tmp/bdevperf.sock 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71484 ']' 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:23.808 07:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.067 [2024-11-08 07:41:41.771743] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:14:24.067 [2024-11-08 07:41:41.771817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71484 ] 00:14:24.067 [2024-11-08 07:41:41.912308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.067 [2024-11-08 07:41:41.960199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.067 [2024-11-08 07:41:42.002321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.327 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:24.327 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:24.327 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:14:24.327 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:24.586 [2024-11-08 07:41:42.418660] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.586 TLSTESTn1 00:14:24.586 07:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:24.845 Running I/O for 10 seconds... 00:14:26.767 5832.00 IOPS, 22.78 MiB/s [2024-11-08T07:41:45.665Z] 5788.00 IOPS, 22.61 MiB/s [2024-11-08T07:41:47.044Z] 5783.33 IOPS, 22.59 MiB/s [2024-11-08T07:41:47.979Z] 5775.25 IOPS, 22.56 MiB/s [2024-11-08T07:41:48.917Z] 5771.80 IOPS, 22.55 MiB/s [2024-11-08T07:41:49.852Z] 5780.67 IOPS, 22.58 MiB/s [2024-11-08T07:41:50.789Z] 5773.57 IOPS, 22.55 MiB/s [2024-11-08T07:41:51.726Z] 5767.88 IOPS, 22.53 MiB/s [2024-11-08T07:41:52.663Z] 5760.78 IOPS, 22.50 MiB/s [2024-11-08T07:41:52.663Z] 5753.10 IOPS, 22.47 MiB/s 00:14:34.702 Latency(us) 00:14:34.702 [2024-11-08T07:41:52.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.702 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:34.702 Verification LBA range: start 0x0 length 0x2000 00:14:34.702 TLSTESTn1 : 10.01 5758.54 22.49 0.00 0.00 22193.38 4493.90 16227.96 00:14:34.702 [2024-11-08T07:41:52.663Z] =================================================================================================================== 00:14:34.702 [2024-11-08T07:41:52.663Z] Total : 5758.54 22.49 0.00 0.00 22193.38 4493.90 16227.96 00:14:34.702 { 00:14:34.702 "results": [ 00:14:34.702 { 00:14:34.702 "job": "TLSTESTn1", 00:14:34.702 "core_mask": "0x4", 00:14:34.702 "workload": "verify", 00:14:34.702 "status": "finished", 00:14:34.702 "verify_range": { 00:14:34.702 "start": 0, 00:14:34.702 "length": 8192 00:14:34.702 }, 00:14:34.702 "queue_depth": 128, 00:14:34.702 "io_size": 4096, 00:14:34.702 "runtime": 10.012431, 00:14:34.702 "iops": 5758.541556990505, 00:14:34.702 "mibps": 22.49430295699416, 00:14:34.702 "io_failed": 0, 00:14:34.702 "io_timeout": 0, 00:14:34.702 "avg_latency_us": 22193.3789068853, 00:14:34.702 "min_latency_us": 4493.897142857143, 00:14:34.702 
"max_latency_us": 16227.961904761905 00:14:34.702 } 00:14:34.702 ], 00:14:34.702 "core_count": 1 00:14:34.702 } 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71484 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71484 ']' 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71484 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71484 00:14:34.961 killing process with pid 71484 00:14:34.961 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.961 00:14:34.961 Latency(us) 00:14:34.961 [2024-11-08T07:41:52.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.961 [2024-11-08T07:41:52.922Z] =================================================================================================================== 00:14:34.961 [2024-11-08T07:41:52.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71484' 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71484 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71484 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Qkew0YUEwh 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qkew0YUEwh 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qkew0YUEwh 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:34.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qkew0YUEwh 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qkew0YUEwh 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71614 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71614 /var/tmp/bdevperf.sock 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71614 ']' 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:34.961 07:41:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.220 [2024-11-08 07:41:52.924566] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:14:35.220 [2024-11-08 07:41:52.924786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71614 ] 00:14:35.220 [2024-11-08 07:41:53.063854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.220 [2024-11-08 07:41:53.113329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.220 [2024-11-08 07:41:53.155407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:35.479 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:35.479 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:35.479 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:14:35.738 [2024-11-08 07:41:53.443686] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Qkew0YUEwh': 0100666 00:14:35.738 [2024-11-08 07:41:53.443939] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:35.738 request: 00:14:35.738 { 00:14:35.738 "name": "key0", 00:14:35.738 "path": "/tmp/tmp.Qkew0YUEwh", 00:14:35.738 "method": "keyring_file_add_key", 00:14:35.738 "req_id": 1 00:14:35.738 } 00:14:35.738 Got JSON-RPC error response 00:14:35.738 response: 00:14:35.738 { 00:14:35.738 "code": -1, 00:14:35.738 "message": "Operation not permitted" 00:14:35.738 } 00:14:35.738 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:35.997 [2024-11-08 07:41:53.715835] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:35.997 [2024-11-08 07:41:53.716119] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:35.997 request: 00:14:35.997 { 00:14:35.997 "name": "TLSTEST", 00:14:35.997 "trtype": "tcp", 00:14:35.997 "traddr": "10.0.0.3", 00:14:35.997 "adrfam": "ipv4", 00:14:35.997 "trsvcid": "4420", 00:14:35.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.997 "prchk_reftag": false, 00:14:35.997 "prchk_guard": false, 00:14:35.997 "hdgst": false, 00:14:35.997 "ddgst": false, 00:14:35.997 "psk": "key0", 00:14:35.997 "allow_unrecognized_csi": false, 00:14:35.997 "method": "bdev_nvme_attach_controller", 00:14:35.997 "req_id": 1 00:14:35.997 } 00:14:35.997 Got JSON-RPC error response 00:14:35.997 response: 00:14:35.997 { 00:14:35.997 "code": -126, 00:14:35.997 "message": "Required key not available" 00:14:35.997 } 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71614 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71614 ']' 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71614 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71614 00:14:35.997 killing process with pid 71614 00:14:35.997 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.997 00:14:35.997 Latency(us) 00:14:35.997 [2024-11-08T07:41:53.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.997 [2024-11-08T07:41:53.958Z] =================================================================================================================== 00:14:35.997 [2024-11-08T07:41:53.958Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71614' 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71614 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71614 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71429 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71429 ']' 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71429 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:35.997 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71429 00:14:36.260 killing process with pid 71429 00:14:36.260 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:36.260 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:36.260 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71429' 00:14:36.260 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71429 00:14:36.260 07:41:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71429 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71640 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71640 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71640 ']' 00:14:36.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:36.260 07:41:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.260 [2024-11-08 07:41:54.214394] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:36.260 [2024-11-08 07:41:54.214491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.520 [2024-11-08 07:41:54.364561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.520 [2024-11-08 07:41:54.414096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.520 [2024-11-08 07:41:54.414159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.520 [2024-11-08 07:41:54.414169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.520 [2024-11-08 07:41:54.414193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.520 [2024-11-08 07:41:54.414200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
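The chmod 0600 / chmod 0666 steps in this test are what the keyring_file_check_path errors in the log are exercising: a key file that is group- or world-accessible (mode 0100666) is refused and the RPC fails with "Operation not permitted", while the 0600 copy is accepted. The sketch below is an illustration of that kind of check only; the exact permission mask SPDK's keyring applies is an assumption, not taken from this log.

import os
import stat

def check_key_file_permissions(path: str) -> None:
    """Reject key files that group or others can access (illustrative only).

    Mirrors the behaviour suggested by the keyring_file_check_path errors
    above (0600 accepted, 0666 rejected); the precise mask enforced by the
    real keyring is an assumption.
    """
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            f"Invalid permissions for key file '{path}': {oct(mode)}")

# A file created with mode 0600 passes; after chmod 0666 it raises.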
00:14:36.520 [2024-11-08 07:41:54.414480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.520 [2024-11-08 07:41:54.456082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Qkew0YUEwh 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Qkew0YUEwh 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Qkew0YUEwh 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qkew0YUEwh 00:14:37.458 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:37.718 [2024-11-08 07:41:55.437284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.718 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:37.976 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:38.235 [2024-11-08 07:41:55.965390] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:38.236 [2024-11-08 07:41:55.965595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:38.236 07:41:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:38.236 malloc0 00:14:38.236 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:38.494 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:14:38.753 
[2024-11-08 07:41:56.558358] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Qkew0YUEwh': 0100666 00:14:38.753 [2024-11-08 07:41:56.558395] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:38.753 request: 00:14:38.753 { 00:14:38.753 "name": "key0", 00:14:38.753 "path": "/tmp/tmp.Qkew0YUEwh", 00:14:38.753 "method": "keyring_file_add_key", 00:14:38.753 "req_id": 1 00:14:38.753 } 00:14:38.753 Got JSON-RPC error response 00:14:38.753 response: 00:14:38.753 { 00:14:38.753 "code": -1, 00:14:38.753 "message": "Operation not permitted" 00:14:38.753 } 00:14:38.753 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:39.012 [2024-11-08 07:41:56.762416] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:39.012 [2024-11-08 07:41:56.762689] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:39.012 request: 00:14:39.012 { 00:14:39.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.012 "host": "nqn.2016-06.io.spdk:host1", 00:14:39.012 "psk": "key0", 00:14:39.012 "method": "nvmf_subsystem_add_host", 00:14:39.012 "req_id": 1 00:14:39.012 } 00:14:39.012 Got JSON-RPC error response 00:14:39.012 response: 00:14:39.012 { 00:14:39.012 "code": -32603, 00:14:39.012 "message": "Internal error" 00:14:39.012 } 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71640 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71640 ']' 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71640 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71640 00:14:39.012 killing process with pid 71640 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71640' 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71640 00:14:39.012 07:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71640 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Qkew0YUEwh 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71705 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71705 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71705 ']' 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:39.272 07:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.272 [2024-11-08 07:41:57.076751] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:39.272 [2024-11-08 07:41:57.077872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.272 [2024-11-08 07:41:57.229646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.531 [2024-11-08 07:41:57.280556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.531 [2024-11-08 07:41:57.280600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.531 [2024-11-08 07:41:57.280610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.531 [2024-11-08 07:41:57.280619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.531 [2024-11-08 07:41:57.280626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:39.531 [2024-11-08 07:41:57.280892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.531 [2024-11-08 07:41:57.323070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.099 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:40.099 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:40.099 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:40.099 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:40.099 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.358 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.358 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Qkew0YUEwh 00:14:40.358 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qkew0YUEwh 00:14:40.358 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:40.617 [2024-11-08 07:41:58.324064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.617 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:40.875 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:40.875 [2024-11-08 07:41:58.788163] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:40.875 [2024-11-08 07:41:58.788368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:40.875 07:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.134 malloc0 00:14:41.134 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:41.393 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:14:41.653 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:41.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
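The rpc.py -s /var/tmp/bdevperf.sock calls traced in this log are thin wrappers around JSON-RPC requests sent over the bdevperf UNIX-domain socket; when a call fails, the log prints the same request dictionary. The sketch below shows a minimal version of that exchange with simplified framing and error handling (assumptions: one request per connection and a single JSON object in the reply); the commented calls reuse method names and a subset of the parameters dumped earlier in the log.

import json
import socket

def rpc_call(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    """Send one JSON-RPC 2.0 request over an SPDK-style UNIX socket (sketch)."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf)   # reply is complete once it parses
            except json.JSONDecodeError:
                continue                 # keep reading

# Client-side calls equivalent to the ones traced above:
# rpc_call("/var/tmp/bdevperf.sock", "keyring_file_add_key",
#          {"name": "key0", "path": "/tmp/tmp.Qkew0YUEwh"})
# rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
#          {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.3",
#           "adrfam": "ipv4", "trsvcid": "4420",
#           "subnqn": "nqn.2016-06.io.spdk:cnode1",
#           "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0"})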
00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71766 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71766 /var/tmp/bdevperf.sock 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71766 ']' 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:41.911 07:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.911 [2024-11-08 07:41:59.738207] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:41.911 [2024-11-08 07:41:59.738451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71766 ] 00:14:42.170 [2024-11-08 07:41:59.886041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.170 [2024-11-08 07:41:59.945119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.170 [2024-11-08 07:41:59.993803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.737 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:42.737 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:42.738 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:14:42.996 07:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:43.256 [2024-11-08 07:42:00.979368] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.256 TLSTESTn1 00:14:43.256 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:43.515 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:43.515 "subsystems": [ 00:14:43.515 { 00:14:43.515 "subsystem": "keyring", 00:14:43.515 "config": [ 00:14:43.515 { 00:14:43.515 "method": "keyring_file_add_key", 00:14:43.515 "params": { 00:14:43.515 "name": "key0", 00:14:43.515 "path": "/tmp/tmp.Qkew0YUEwh" 00:14:43.515 } 00:14:43.515 } 00:14:43.515 ] 00:14:43.515 }, 
00:14:43.515 { 00:14:43.515 "subsystem": "iobuf", 00:14:43.515 "config": [ 00:14:43.515 { 00:14:43.515 "method": "iobuf_set_options", 00:14:43.515 "params": { 00:14:43.515 "small_pool_count": 8192, 00:14:43.515 "large_pool_count": 1024, 00:14:43.515 "small_bufsize": 8192, 00:14:43.515 "large_bufsize": 135168, 00:14:43.515 "enable_numa": false 00:14:43.515 } 00:14:43.515 } 00:14:43.515 ] 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "subsystem": "sock", 00:14:43.515 "config": [ 00:14:43.515 { 00:14:43.515 "method": "sock_set_default_impl", 00:14:43.515 "params": { 00:14:43.515 "impl_name": "uring" 00:14:43.515 } 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "method": "sock_impl_set_options", 00:14:43.515 "params": { 00:14:43.515 "impl_name": "ssl", 00:14:43.515 "recv_buf_size": 4096, 00:14:43.515 "send_buf_size": 4096, 00:14:43.515 "enable_recv_pipe": true, 00:14:43.515 "enable_quickack": false, 00:14:43.515 "enable_placement_id": 0, 00:14:43.515 "enable_zerocopy_send_server": true, 00:14:43.515 "enable_zerocopy_send_client": false, 00:14:43.515 "zerocopy_threshold": 0, 00:14:43.515 "tls_version": 0, 00:14:43.515 "enable_ktls": false 00:14:43.515 } 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "method": "sock_impl_set_options", 00:14:43.515 "params": { 00:14:43.515 "impl_name": "posix", 00:14:43.515 "recv_buf_size": 2097152, 00:14:43.515 "send_buf_size": 2097152, 00:14:43.515 "enable_recv_pipe": true, 00:14:43.515 "enable_quickack": false, 00:14:43.515 "enable_placement_id": 0, 00:14:43.515 "enable_zerocopy_send_server": true, 00:14:43.515 "enable_zerocopy_send_client": false, 00:14:43.515 "zerocopy_threshold": 0, 00:14:43.515 "tls_version": 0, 00:14:43.515 "enable_ktls": false 00:14:43.515 } 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "method": "sock_impl_set_options", 00:14:43.515 "params": { 00:14:43.515 "impl_name": "uring", 00:14:43.515 "recv_buf_size": 2097152, 00:14:43.515 "send_buf_size": 2097152, 00:14:43.515 "enable_recv_pipe": true, 00:14:43.515 "enable_quickack": false, 00:14:43.515 "enable_placement_id": 0, 00:14:43.515 "enable_zerocopy_send_server": false, 00:14:43.515 "enable_zerocopy_send_client": false, 00:14:43.515 "zerocopy_threshold": 0, 00:14:43.515 "tls_version": 0, 00:14:43.515 "enable_ktls": false 00:14:43.515 } 00:14:43.515 } 00:14:43.515 ] 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "subsystem": "vmd", 00:14:43.515 "config": [] 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "subsystem": "accel", 00:14:43.515 "config": [ 00:14:43.515 { 00:14:43.515 "method": "accel_set_options", 00:14:43.515 "params": { 00:14:43.515 "small_cache_size": 128, 00:14:43.515 "large_cache_size": 16, 00:14:43.515 "task_count": 2048, 00:14:43.515 "sequence_count": 2048, 00:14:43.515 "buf_count": 2048 00:14:43.515 } 00:14:43.515 } 00:14:43.515 ] 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "subsystem": "bdev", 00:14:43.515 "config": [ 00:14:43.515 { 00:14:43.515 "method": "bdev_set_options", 00:14:43.515 "params": { 00:14:43.515 "bdev_io_pool_size": 65535, 00:14:43.515 "bdev_io_cache_size": 256, 00:14:43.515 "bdev_auto_examine": true, 00:14:43.515 "iobuf_small_cache_size": 128, 00:14:43.515 "iobuf_large_cache_size": 16 00:14:43.515 } 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "method": "bdev_raid_set_options", 00:14:43.515 "params": { 00:14:43.515 "process_window_size_kb": 1024, 00:14:43.515 "process_max_bandwidth_mb_sec": 0 00:14:43.515 } 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "method": "bdev_iscsi_set_options", 00:14:43.515 "params": { 00:14:43.515 "timeout_sec": 30 00:14:43.515 } 00:14:43.515 
}, 00:14:43.515 { 00:14:43.515 "method": "bdev_nvme_set_options", 00:14:43.515 "params": { 00:14:43.515 "action_on_timeout": "none", 00:14:43.515 "timeout_us": 0, 00:14:43.515 "timeout_admin_us": 0, 00:14:43.515 "keep_alive_timeout_ms": 10000, 00:14:43.515 "arbitration_burst": 0, 00:14:43.515 "low_priority_weight": 0, 00:14:43.515 "medium_priority_weight": 0, 00:14:43.515 "high_priority_weight": 0, 00:14:43.515 "nvme_adminq_poll_period_us": 10000, 00:14:43.515 "nvme_ioq_poll_period_us": 0, 00:14:43.515 "io_queue_requests": 0, 00:14:43.515 "delay_cmd_submit": true, 00:14:43.515 "transport_retry_count": 4, 00:14:43.515 "bdev_retry_count": 3, 00:14:43.515 "transport_ack_timeout": 0, 00:14:43.515 "ctrlr_loss_timeout_sec": 0, 00:14:43.515 "reconnect_delay_sec": 0, 00:14:43.515 "fast_io_fail_timeout_sec": 0, 00:14:43.515 "disable_auto_failback": false, 00:14:43.515 "generate_uuids": false, 00:14:43.515 "transport_tos": 0, 00:14:43.515 "nvme_error_stat": false, 00:14:43.515 "rdma_srq_size": 0, 00:14:43.515 "io_path_stat": false, 00:14:43.515 "allow_accel_sequence": false, 00:14:43.515 "rdma_max_cq_size": 0, 00:14:43.515 "rdma_cm_event_timeout_ms": 0, 00:14:43.515 "dhchap_digests": [ 00:14:43.515 "sha256", 00:14:43.515 "sha384", 00:14:43.515 "sha512" 00:14:43.515 ], 00:14:43.515 "dhchap_dhgroups": [ 00:14:43.515 "null", 00:14:43.515 "ffdhe2048", 00:14:43.515 "ffdhe3072", 00:14:43.515 "ffdhe4096", 00:14:43.515 "ffdhe6144", 00:14:43.515 "ffdhe8192" 00:14:43.515 ] 00:14:43.515 } 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "method": "bdev_nvme_set_hotplug", 00:14:43.515 "params": { 00:14:43.515 "period_us": 100000, 00:14:43.515 "enable": false 00:14:43.515 } 00:14:43.515 }, 00:14:43.515 { 00:14:43.515 "method": "bdev_malloc_create", 00:14:43.515 "params": { 00:14:43.515 "name": "malloc0", 00:14:43.515 "num_blocks": 8192, 00:14:43.515 "block_size": 4096, 00:14:43.515 "physical_block_size": 4096, 00:14:43.515 "uuid": "6e3259ed-0555-4a35-9bd1-9008bb9ad3a3", 00:14:43.515 "optimal_io_boundary": 0, 00:14:43.515 "md_size": 0, 00:14:43.516 "dif_type": 0, 00:14:43.516 "dif_is_head_of_md": false, 00:14:43.516 "dif_pi_format": 0 00:14:43.516 } 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "method": "bdev_wait_for_examine" 00:14:43.516 } 00:14:43.516 ] 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "subsystem": "nbd", 00:14:43.516 "config": [] 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "subsystem": "scheduler", 00:14:43.516 "config": [ 00:14:43.516 { 00:14:43.516 "method": "framework_set_scheduler", 00:14:43.516 "params": { 00:14:43.516 "name": "static" 00:14:43.516 } 00:14:43.516 } 00:14:43.516 ] 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "subsystem": "nvmf", 00:14:43.516 "config": [ 00:14:43.516 { 00:14:43.516 "method": "nvmf_set_config", 00:14:43.516 "params": { 00:14:43.516 "discovery_filter": "match_any", 00:14:43.516 "admin_cmd_passthru": { 00:14:43.516 "identify_ctrlr": false 00:14:43.516 }, 00:14:43.516 "dhchap_digests": [ 00:14:43.516 "sha256", 00:14:43.516 "sha384", 00:14:43.516 "sha512" 00:14:43.516 ], 00:14:43.516 "dhchap_dhgroups": [ 00:14:43.516 "null", 00:14:43.516 "ffdhe2048", 00:14:43.516 "ffdhe3072", 00:14:43.516 "ffdhe4096", 00:14:43.516 "ffdhe6144", 00:14:43.516 "ffdhe8192" 00:14:43.516 ] 00:14:43.516 } 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "method": "nvmf_set_max_subsystems", 00:14:43.516 "params": { 00:14:43.516 "max_subsystems": 1024 00:14:43.516 } 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "method": "nvmf_set_crdt", 00:14:43.516 "params": { 00:14:43.516 "crdt1": 0, 00:14:43.516 
"crdt2": 0, 00:14:43.516 "crdt3": 0 00:14:43.516 } 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "method": "nvmf_create_transport", 00:14:43.516 "params": { 00:14:43.516 "trtype": "TCP", 00:14:43.516 "max_queue_depth": 128, 00:14:43.516 "max_io_qpairs_per_ctrlr": 127, 00:14:43.516 "in_capsule_data_size": 4096, 00:14:43.516 "max_io_size": 131072, 00:14:43.516 "io_unit_size": 131072, 00:14:43.516 "max_aq_depth": 128, 00:14:43.516 "num_shared_buffers": 511, 00:14:43.516 "buf_cache_size": 4294967295, 00:14:43.516 "dif_insert_or_strip": false, 00:14:43.516 "zcopy": false, 00:14:43.516 "c2h_success": false, 00:14:43.516 "sock_priority": 0, 00:14:43.516 "abort_timeout_sec": 1, 00:14:43.516 "ack_timeout": 0, 00:14:43.516 "data_wr_pool_size": 0 00:14:43.516 } 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "method": "nvmf_create_subsystem", 00:14:43.516 "params": { 00:14:43.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.516 "allow_any_host": false, 00:14:43.516 "serial_number": "SPDK00000000000001", 00:14:43.516 "model_number": "SPDK bdev Controller", 00:14:43.516 "max_namespaces": 10, 00:14:43.516 "min_cntlid": 1, 00:14:43.516 "max_cntlid": 65519, 00:14:43.516 "ana_reporting": false 00:14:43.516 } 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "method": "nvmf_subsystem_add_host", 00:14:43.516 "params": { 00:14:43.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.516 "host": "nqn.2016-06.io.spdk:host1", 00:14:43.516 "psk": "key0" 00:14:43.516 } 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "method": "nvmf_subsystem_add_ns", 00:14:43.516 "params": { 00:14:43.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.516 "namespace": { 00:14:43.516 "nsid": 1, 00:14:43.516 "bdev_name": "malloc0", 00:14:43.516 "nguid": "6E3259ED05554A359BD19008BB9AD3A3", 00:14:43.516 "uuid": "6e3259ed-0555-4a35-9bd1-9008bb9ad3a3", 00:14:43.516 "no_auto_visible": false 00:14:43.516 } 00:14:43.516 } 00:14:43.516 }, 00:14:43.516 { 00:14:43.516 "method": "nvmf_subsystem_add_listener", 00:14:43.516 "params": { 00:14:43.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.516 "listen_address": { 00:14:43.516 "trtype": "TCP", 00:14:43.516 "adrfam": "IPv4", 00:14:43.516 "traddr": "10.0.0.3", 00:14:43.516 "trsvcid": "4420" 00:14:43.516 }, 00:14:43.516 "secure_channel": true 00:14:43.516 } 00:14:43.516 } 00:14:43.516 ] 00:14:43.516 } 00:14:43.516 ] 00:14:43.516 }' 00:14:43.516 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:43.776 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:43.776 "subsystems": [ 00:14:43.776 { 00:14:43.776 "subsystem": "keyring", 00:14:43.776 "config": [ 00:14:43.776 { 00:14:43.777 "method": "keyring_file_add_key", 00:14:43.777 "params": { 00:14:43.777 "name": "key0", 00:14:43.777 "path": "/tmp/tmp.Qkew0YUEwh" 00:14:43.777 } 00:14:43.777 } 00:14:43.777 ] 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "subsystem": "iobuf", 00:14:43.777 "config": [ 00:14:43.777 { 00:14:43.777 "method": "iobuf_set_options", 00:14:43.777 "params": { 00:14:43.777 "small_pool_count": 8192, 00:14:43.777 "large_pool_count": 1024, 00:14:43.777 "small_bufsize": 8192, 00:14:43.777 "large_bufsize": 135168, 00:14:43.777 "enable_numa": false 00:14:43.777 } 00:14:43.777 } 00:14:43.777 ] 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "subsystem": "sock", 00:14:43.777 "config": [ 00:14:43.777 { 00:14:43.777 "method": "sock_set_default_impl", 00:14:43.777 "params": { 00:14:43.777 "impl_name": "uring" 00:14:43.777 
} 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "sock_impl_set_options", 00:14:43.777 "params": { 00:14:43.777 "impl_name": "ssl", 00:14:43.777 "recv_buf_size": 4096, 00:14:43.777 "send_buf_size": 4096, 00:14:43.777 "enable_recv_pipe": true, 00:14:43.777 "enable_quickack": false, 00:14:43.777 "enable_placement_id": 0, 00:14:43.777 "enable_zerocopy_send_server": true, 00:14:43.777 "enable_zerocopy_send_client": false, 00:14:43.777 "zerocopy_threshold": 0, 00:14:43.777 "tls_version": 0, 00:14:43.777 "enable_ktls": false 00:14:43.777 } 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "sock_impl_set_options", 00:14:43.777 "params": { 00:14:43.777 "impl_name": "posix", 00:14:43.777 "recv_buf_size": 2097152, 00:14:43.777 "send_buf_size": 2097152, 00:14:43.777 "enable_recv_pipe": true, 00:14:43.777 "enable_quickack": false, 00:14:43.777 "enable_placement_id": 0, 00:14:43.777 "enable_zerocopy_send_server": true, 00:14:43.777 "enable_zerocopy_send_client": false, 00:14:43.777 "zerocopy_threshold": 0, 00:14:43.777 "tls_version": 0, 00:14:43.777 "enable_ktls": false 00:14:43.777 } 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "sock_impl_set_options", 00:14:43.777 "params": { 00:14:43.777 "impl_name": "uring", 00:14:43.777 "recv_buf_size": 2097152, 00:14:43.777 "send_buf_size": 2097152, 00:14:43.777 "enable_recv_pipe": true, 00:14:43.777 "enable_quickack": false, 00:14:43.777 "enable_placement_id": 0, 00:14:43.777 "enable_zerocopy_send_server": false, 00:14:43.777 "enable_zerocopy_send_client": false, 00:14:43.777 "zerocopy_threshold": 0, 00:14:43.777 "tls_version": 0, 00:14:43.777 "enable_ktls": false 00:14:43.777 } 00:14:43.777 } 00:14:43.777 ] 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "subsystem": "vmd", 00:14:43.777 "config": [] 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "subsystem": "accel", 00:14:43.777 "config": [ 00:14:43.777 { 00:14:43.777 "method": "accel_set_options", 00:14:43.777 "params": { 00:14:43.777 "small_cache_size": 128, 00:14:43.777 "large_cache_size": 16, 00:14:43.777 "task_count": 2048, 00:14:43.777 "sequence_count": 2048, 00:14:43.777 "buf_count": 2048 00:14:43.777 } 00:14:43.777 } 00:14:43.777 ] 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "subsystem": "bdev", 00:14:43.777 "config": [ 00:14:43.777 { 00:14:43.777 "method": "bdev_set_options", 00:14:43.777 "params": { 00:14:43.777 "bdev_io_pool_size": 65535, 00:14:43.777 "bdev_io_cache_size": 256, 00:14:43.777 "bdev_auto_examine": true, 00:14:43.777 "iobuf_small_cache_size": 128, 00:14:43.777 "iobuf_large_cache_size": 16 00:14:43.777 } 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "bdev_raid_set_options", 00:14:43.777 "params": { 00:14:43.777 "process_window_size_kb": 1024, 00:14:43.777 "process_max_bandwidth_mb_sec": 0 00:14:43.777 } 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "bdev_iscsi_set_options", 00:14:43.777 "params": { 00:14:43.777 "timeout_sec": 30 00:14:43.777 } 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "bdev_nvme_set_options", 00:14:43.777 "params": { 00:14:43.777 "action_on_timeout": "none", 00:14:43.777 "timeout_us": 0, 00:14:43.777 "timeout_admin_us": 0, 00:14:43.777 "keep_alive_timeout_ms": 10000, 00:14:43.777 "arbitration_burst": 0, 00:14:43.777 "low_priority_weight": 0, 00:14:43.777 "medium_priority_weight": 0, 00:14:43.777 "high_priority_weight": 0, 00:14:43.777 "nvme_adminq_poll_period_us": 10000, 00:14:43.777 "nvme_ioq_poll_period_us": 0, 00:14:43.777 "io_queue_requests": 512, 00:14:43.777 "delay_cmd_submit": true, 00:14:43.777 "transport_retry_count": 4, 
00:14:43.777 "bdev_retry_count": 3, 00:14:43.777 "transport_ack_timeout": 0, 00:14:43.777 "ctrlr_loss_timeout_sec": 0, 00:14:43.777 "reconnect_delay_sec": 0, 00:14:43.777 "fast_io_fail_timeout_sec": 0, 00:14:43.777 "disable_auto_failback": false, 00:14:43.777 "generate_uuids": false, 00:14:43.777 "transport_tos": 0, 00:14:43.777 "nvme_error_stat": false, 00:14:43.777 "rdma_srq_size": 0, 00:14:43.777 "io_path_stat": false, 00:14:43.777 "allow_accel_sequence": false, 00:14:43.777 "rdma_max_cq_size": 0, 00:14:43.777 "rdma_cm_event_timeout_ms": 0, 00:14:43.777 "dhchap_digests": [ 00:14:43.777 "sha256", 00:14:43.777 "sha384", 00:14:43.777 "sha512" 00:14:43.777 ], 00:14:43.777 "dhchap_dhgroups": [ 00:14:43.777 "null", 00:14:43.777 "ffdhe2048", 00:14:43.777 "ffdhe3072", 00:14:43.777 "ffdhe4096", 00:14:43.777 "ffdhe6144", 00:14:43.777 "ffdhe8192" 00:14:43.777 ] 00:14:43.777 } 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "bdev_nvme_attach_controller", 00:14:43.777 "params": { 00:14:43.777 "name": "TLSTEST", 00:14:43.777 "trtype": "TCP", 00:14:43.777 "adrfam": "IPv4", 00:14:43.777 "traddr": "10.0.0.3", 00:14:43.777 "trsvcid": "4420", 00:14:43.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.777 "prchk_reftag": false, 00:14:43.777 "prchk_guard": false, 00:14:43.777 "ctrlr_loss_timeout_sec": 0, 00:14:43.777 "reconnect_delay_sec": 0, 00:14:43.777 "fast_io_fail_timeout_sec": 0, 00:14:43.777 "psk": "key0", 00:14:43.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.777 "hdgst": false, 00:14:43.777 "ddgst": false, 00:14:43.777 "multipath": "multipath" 00:14:43.777 } 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "bdev_nvme_set_hotplug", 00:14:43.777 "params": { 00:14:43.777 "period_us": 100000, 00:14:43.777 "enable": false 00:14:43.777 } 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "method": "bdev_wait_for_examine" 00:14:43.777 } 00:14:43.777 ] 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "subsystem": "nbd", 00:14:43.777 "config": [] 00:14:43.777 } 00:14:43.777 ] 00:14:43.777 }' 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71766 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71766 ']' 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71766 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71766 00:14:43.778 killing process with pid 71766 00:14:43.778 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.778 00:14:43.778 Latency(us) 00:14:43.778 [2024-11-08T07:42:01.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.778 [2024-11-08T07:42:01.739Z] =================================================================================================================== 00:14:43.778 [2024-11-08T07:42:01.739Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 71766' 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71766 00:14:43.778 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71766 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71705 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71705 ']' 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71705 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71705 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:44.037 killing process with pid 71705 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71705' 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71705 00:14:44.037 07:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71705 00:14:44.297 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:44.297 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.297 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:44.297 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:44.297 "subsystems": [ 00:14:44.297 { 00:14:44.297 "subsystem": "keyring", 00:14:44.297 "config": [ 00:14:44.297 { 00:14:44.297 "method": "keyring_file_add_key", 00:14:44.297 "params": { 00:14:44.297 "name": "key0", 00:14:44.297 "path": "/tmp/tmp.Qkew0YUEwh" 00:14:44.297 } 00:14:44.297 } 00:14:44.297 ] 00:14:44.297 }, 00:14:44.297 { 00:14:44.297 "subsystem": "iobuf", 00:14:44.297 "config": [ 00:14:44.297 { 00:14:44.297 "method": "iobuf_set_options", 00:14:44.297 "params": { 00:14:44.297 "small_pool_count": 8192, 00:14:44.297 "large_pool_count": 1024, 00:14:44.297 "small_bufsize": 8192, 00:14:44.297 "large_bufsize": 135168, 00:14:44.297 "enable_numa": false 00:14:44.297 } 00:14:44.297 } 00:14:44.297 ] 00:14:44.297 }, 00:14:44.297 { 00:14:44.297 "subsystem": "sock", 00:14:44.297 "config": [ 00:14:44.297 { 00:14:44.297 "method": "sock_set_default_impl", 00:14:44.297 "params": { 00:14:44.297 "impl_name": "uring" 00:14:44.297 } 00:14:44.297 }, 00:14:44.297 { 00:14:44.297 "method": "sock_impl_set_options", 00:14:44.297 "params": { 00:14:44.297 "impl_name": "ssl", 00:14:44.297 "recv_buf_size": 4096, 00:14:44.297 "send_buf_size": 4096, 00:14:44.297 "enable_recv_pipe": true, 00:14:44.297 "enable_quickack": false, 00:14:44.297 "enable_placement_id": 0, 00:14:44.297 "enable_zerocopy_send_server": true, 00:14:44.297 "enable_zerocopy_send_client": false, 00:14:44.297 "zerocopy_threshold": 0, 00:14:44.297 "tls_version": 0, 00:14:44.297 "enable_ktls": false 00:14:44.297 } 00:14:44.297 }, 00:14:44.297 { 00:14:44.297 "method": 
"sock_impl_set_options", 00:14:44.297 "params": { 00:14:44.297 "impl_name": "posix", 00:14:44.297 "recv_buf_size": 2097152, 00:14:44.297 "send_buf_size": 2097152, 00:14:44.297 "enable_recv_pipe": true, 00:14:44.297 "enable_quickack": false, 00:14:44.297 "enable_placement_id": 0, 00:14:44.297 "enable_zerocopy_send_server": true, 00:14:44.297 "enable_zerocopy_send_client": false, 00:14:44.297 "zerocopy_threshold": 0, 00:14:44.297 "tls_version": 0, 00:14:44.297 "enable_ktls": false 00:14:44.297 } 00:14:44.297 }, 00:14:44.297 { 00:14:44.297 "method": "sock_impl_set_options", 00:14:44.297 "params": { 00:14:44.297 "impl_name": "uring", 00:14:44.297 "recv_buf_size": 2097152, 00:14:44.297 "send_buf_size": 2097152, 00:14:44.297 "enable_recv_pipe": true, 00:14:44.297 "enable_quickack": false, 00:14:44.297 "enable_placement_id": 0, 00:14:44.297 "enable_zerocopy_send_server": false, 00:14:44.297 "enable_zerocopy_send_client": false, 00:14:44.297 "zerocopy_threshold": 0, 00:14:44.298 "tls_version": 0, 00:14:44.298 "enable_ktls": false 00:14:44.298 } 00:14:44.298 } 00:14:44.298 ] 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "subsystem": "vmd", 00:14:44.298 "config": [] 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "subsystem": "accel", 00:14:44.298 "config": [ 00:14:44.298 { 00:14:44.298 "method": "accel_set_options", 00:14:44.298 "params": { 00:14:44.298 "small_cache_size": 128, 00:14:44.298 "large_cache_size": 16, 00:14:44.298 "task_count": 2048, 00:14:44.298 "sequence_count": 2048, 00:14:44.298 "buf_count": 2048 00:14:44.298 } 00:14:44.298 } 00:14:44.298 ] 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "subsystem": "bdev", 00:14:44.298 "config": [ 00:14:44.298 { 00:14:44.298 "method": "bdev_set_options", 00:14:44.298 "params": { 00:14:44.298 "bdev_io_pool_size": 65535, 00:14:44.298 "bdev_io_cache_size": 256, 00:14:44.298 "bdev_auto_examine": true, 00:14:44.298 "iobuf_small_cache_size": 128, 00:14:44.298 "iobuf_large_cache_size": 16 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "bdev_raid_set_options", 00:14:44.298 "params": { 00:14:44.298 "process_window_size_kb": 1024, 00:14:44.298 "process_max_bandwidth_mb_sec": 0 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "bdev_iscsi_set_options", 00:14:44.298 "params": { 00:14:44.298 "timeout_sec": 30 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "bdev_nvme_set_options", 00:14:44.298 "params": { 00:14:44.298 "action_on_timeout": "none", 00:14:44.298 "timeout_us": 0, 00:14:44.298 "timeout_admin_us": 0, 00:14:44.298 "keep_alive_timeout_ms": 10000, 00:14:44.298 "arbitration_burst": 0, 00:14:44.298 "low_priority_weight": 0, 00:14:44.298 "medium_priority_weight": 0, 00:14:44.298 "high_priority_weight": 0, 00:14:44.298 "nvme_adminq_poll_period_us": 10000, 00:14:44.298 "nvme_ioq_poll_period_us": 0, 00:14:44.298 "io_queue_requests": 0, 00:14:44.298 "delay_cmd_submit": true, 00:14:44.298 "transport_retry_count": 4, 00:14:44.298 "bdev_retry_count": 3, 00:14:44.298 "transport_ack_timeout": 0, 00:14:44.298 "ctrlr_loss_timeout_sec": 0, 00:14:44.298 "reconnect_delay_sec": 0, 00:14:44.298 "fast_io_fail_timeout_sec": 0, 00:14:44.298 "disable_auto_failback": false, 00:14:44.298 "generate_uuids": false, 00:14:44.298 "transport_tos": 0, 00:14:44.298 "nvme_error_stat": false, 00:14:44.298 "rdma_srq_size": 0, 00:14:44.298 "io_path_stat": false, 00:14:44.298 "allow_accel_sequence": false, 00:14:44.298 "rdma_max_cq_size": 0, 00:14:44.298 "rdma_cm_event_timeout_ms": 0, 00:14:44.298 "dhchap_digests": [ 00:14:44.298 
"sha256", 00:14:44.298 "sha384", 00:14:44.298 "sha512" 00:14:44.298 ], 00:14:44.298 "dhchap_dhgroups": [ 00:14:44.298 "null", 00:14:44.298 "ffdhe2048", 00:14:44.298 "ffdhe3072", 00:14:44.298 "ffdhe4096", 00:14:44.298 "ffdhe6144", 00:14:44.298 "ffdhe8192" 00:14:44.298 ] 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "bdev_nvme_set_hotplug", 00:14:44.298 "params": { 00:14:44.298 "period_us": 100000, 00:14:44.298 "enable": false 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "bdev_malloc_create", 00:14:44.298 "params": { 00:14:44.298 "name": "malloc0", 00:14:44.298 "num_blocks": 8192, 00:14:44.298 "block_size": 4096, 00:14:44.298 "physical_block_size": 4096, 00:14:44.298 "uuid": "6e3259ed-0555-4a35-9bd1-9008bb9ad3a3", 00:14:44.298 "optimal_io_boundary": 0, 00:14:44.298 "md_size": 0, 00:14:44.298 "dif_type": 0, 00:14:44.298 "dif_is_head_of_md": false, 00:14:44.298 "dif_pi_format": 0 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "bdev_wait_for_examine" 00:14:44.298 } 00:14:44.298 ] 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "subsystem": "nbd", 00:14:44.298 "config": [] 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "subsystem": "scheduler", 00:14:44.298 "config": [ 00:14:44.298 { 00:14:44.298 "method": "framework_set_scheduler", 00:14:44.298 "params": { 00:14:44.298 "name": "static" 00:14:44.298 } 00:14:44.298 } 00:14:44.298 ] 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "subsystem": "nvmf", 00:14:44.298 "config": [ 00:14:44.298 { 00:14:44.298 "method": "nvmf_set_config", 00:14:44.298 "params": { 00:14:44.298 "discovery_filter": "match_any", 00:14:44.298 "admin_cmd_passthru": { 00:14:44.298 "identify_ctrlr": false 00:14:44.298 }, 00:14:44.298 "dhchap_digests": [ 00:14:44.298 "sha256", 00:14:44.298 "sha384", 00:14:44.298 "sha512" 00:14:44.298 ], 00:14:44.298 "dhchap_dhgroups": [ 00:14:44.298 "null", 00:14:44.298 "ffdhe2048", 00:14:44.298 "ffdhe3072", 00:14:44.298 "ffdhe4096", 00:14:44.298 "ffdhe6144", 00:14:44.298 "ffdhe8192" 00:14:44.298 ] 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "nvmf_set_max_subsystems", 00:14:44.298 "params": { 00:14:44.298 "max_subsystems": 1024 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "nvmf_set_crdt", 00:14:44.298 "params": { 00:14:44.298 "crdt1": 0, 00:14:44.298 "crdt2": 0, 00:14:44.298 "crdt3": 0 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "nvmf_create_transport", 00:14:44.298 "params": { 00:14:44.298 "trtype": "TCP", 00:14:44.298 "max_queue_depth": 128, 00:14:44.298 "max_io_qpairs_per_ctrlr": 127, 00:14:44.298 "in_capsule_data_size": 4096, 00:14:44.298 "max_io_size": 131072, 00:14:44.298 "io_unit_size": 131072, 00:14:44.298 "max_aq_depth": 128, 00:14:44.298 "num_shared_buffers": 511, 00:14:44.298 "buf_cache_size": 4294967295, 00:14:44.298 "dif_insert_or_strip": false, 00:14:44.298 "zcopy": false, 00:14:44.298 "c2h_success": false, 00:14:44.298 "sock_priority": 0, 00:14:44.298 "abort_timeout_sec": 1, 00:14:44.298 "ack_timeout": 0, 00:14:44.298 "data_wr_pool_size": 0 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "nvmf_create_subsystem", 00:14:44.298 "params": { 00:14:44.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.298 "allow_any_host": false, 00:14:44.298 "serial_number": "SPDK00000000000001", 00:14:44.298 "model_number": "SPDK bdev Controller", 00:14:44.298 "max_namespaces": 10, 00:14:44.298 "min_cntlid": 1, 00:14:44.298 "max_cntlid": 65519, 00:14:44.298 "ana_reporting": false 00:14:44.298 } 
00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "nvmf_subsystem_add_host", 00:14:44.298 "params": { 00:14:44.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.298 "host": "nqn.2016-06.io.spdk:host1", 00:14:44.298 "psk": "key0" 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "nvmf_subsystem_add_ns", 00:14:44.298 "params": { 00:14:44.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.298 "namespace": { 00:14:44.298 "nsid": 1, 00:14:44.298 "bdev_name": "malloc0", 00:14:44.298 "nguid": "6E3259ED05554A359BD19008BB9AD3A3", 00:14:44.298 "uuid": "6e3259ed-0555-4a35-9bd1-9008bb9ad3a3", 00:14:44.298 "no_auto_visible": false 00:14:44.298 } 00:14:44.298 } 00:14:44.298 }, 00:14:44.298 { 00:14:44.298 "method": "nvmf_subsystem_add_listener", 00:14:44.298 "params": { 00:14:44.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.298 "listen_address": { 00:14:44.298 "trtype": "TCP", 00:14:44.298 "adrfam": "IPv4", 00:14:44.298 "traddr": "10.0.0.3", 00:14:44.298 "trsvcid": "4420" 00:14:44.298 }, 00:14:44.298 "secure_channel": true 00:14:44.298 } 00:14:44.298 } 00:14:44.298 ] 00:14:44.298 } 00:14:44.298 ] 00:14:44.298 }' 00:14:44.298 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71810 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71810 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71810 ']' 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:44.299 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.299 [2024-11-08 07:42:02.136100] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:44.299 [2024-11-08 07:42:02.136170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.558 [2024-11-08 07:42:02.277845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.558 [2024-11-08 07:42:02.326857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.558 [2024-11-08 07:42:02.327108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:44.558 [2024-11-08 07:42:02.327210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.558 [2024-11-08 07:42:02.327258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.558 [2024-11-08 07:42:02.327285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.558 [2024-11-08 07:42:02.327630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.558 [2024-11-08 07:42:02.483365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.817 [2024-11-08 07:42:02.553394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.817 [2024-11-08 07:42:02.585332] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.817 [2024-11-08 07:42:02.585521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.076 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:45.076 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:45.076 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.076 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:45.076 07:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.335 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.335 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71842 00:14:45.335 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:45.335 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71842 /var/tmp/bdevperf.sock 00:14:45.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:45.335 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:45.335 "subsystems": [ 00:14:45.335 { 00:14:45.335 "subsystem": "keyring", 00:14:45.335 "config": [ 00:14:45.335 { 00:14:45.335 "method": "keyring_file_add_key", 00:14:45.335 "params": { 00:14:45.335 "name": "key0", 00:14:45.335 "path": "/tmp/tmp.Qkew0YUEwh" 00:14:45.335 } 00:14:45.335 } 00:14:45.335 ] 00:14:45.335 }, 00:14:45.335 { 00:14:45.335 "subsystem": "iobuf", 00:14:45.335 "config": [ 00:14:45.335 { 00:14:45.335 "method": "iobuf_set_options", 00:14:45.335 "params": { 00:14:45.335 "small_pool_count": 8192, 00:14:45.335 "large_pool_count": 1024, 00:14:45.335 "small_bufsize": 8192, 00:14:45.335 "large_bufsize": 135168, 00:14:45.335 "enable_numa": false 00:14:45.335 } 00:14:45.335 } 00:14:45.335 ] 00:14:45.335 }, 00:14:45.335 { 00:14:45.335 "subsystem": "sock", 00:14:45.335 "config": [ 00:14:45.336 { 00:14:45.336 "method": "sock_set_default_impl", 00:14:45.336 "params": { 00:14:45.336 "impl_name": "uring" 00:14:45.336 } 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "method": "sock_impl_set_options", 00:14:45.336 "params": { 00:14:45.336 "impl_name": "ssl", 00:14:45.336 "recv_buf_size": 4096, 00:14:45.336 "send_buf_size": 4096, 00:14:45.336 "enable_recv_pipe": true, 00:14:45.336 "enable_quickack": false, 00:14:45.336 "enable_placement_id": 0, 00:14:45.336 "enable_zerocopy_send_server": true, 00:14:45.336 "enable_zerocopy_send_client": false, 00:14:45.336 "zerocopy_threshold": 0, 00:14:45.336 "tls_version": 0, 00:14:45.336 "enable_ktls": false 00:14:45.336 } 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "method": "sock_impl_set_options", 00:14:45.336 "params": { 00:14:45.336 "impl_name": "posix", 00:14:45.336 "recv_buf_size": 2097152, 00:14:45.336 "send_buf_size": 2097152, 00:14:45.336 "enable_recv_pipe": true, 00:14:45.336 "enable_quickack": false, 00:14:45.336 "enable_placement_id": 0, 00:14:45.336 "enable_zerocopy_send_server": true, 00:14:45.336 "enable_zerocopy_send_client": false, 00:14:45.336 "zerocopy_threshold": 0, 00:14:45.336 "tls_version": 0, 00:14:45.336 "enable_ktls": false 00:14:45.336 } 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "method": "sock_impl_set_options", 00:14:45.336 "params": { 00:14:45.336 "impl_name": "uring", 00:14:45.336 "recv_buf_size": 2097152, 00:14:45.336 "send_buf_size": 2097152, 00:14:45.336 "enable_recv_pipe": true, 00:14:45.336 "enable_quickack": false, 00:14:45.336 "enable_placement_id": 0, 00:14:45.336 "enable_zerocopy_send_server": false, 00:14:45.336 "enable_zerocopy_send_client": false, 00:14:45.336 "zerocopy_threshold": 0, 00:14:45.336 "tls_version": 0, 00:14:45.336 "enable_ktls": false 00:14:45.336 } 00:14:45.336 } 00:14:45.336 ] 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "subsystem": "vmd", 00:14:45.336 "config": [] 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "subsystem": "accel", 00:14:45.336 "config": [ 00:14:45.336 { 00:14:45.336 "method": "accel_set_options", 00:14:45.336 "params": { 00:14:45.336 "small_cache_size": 128, 00:14:45.336 "large_cache_size": 16, 00:14:45.336 "task_count": 2048, 00:14:45.336 "sequence_count": 2048, 00:14:45.336 "buf_count": 2048 00:14:45.336 } 00:14:45.336 } 00:14:45.336 ] 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "subsystem": "bdev", 00:14:45.336 "config": [ 00:14:45.336 { 00:14:45.336 "method": "bdev_set_options", 00:14:45.336 "params": { 00:14:45.336 "bdev_io_pool_size": 65535, 00:14:45.336 "bdev_io_cache_size": 256, 00:14:45.336 "bdev_auto_examine": true, 00:14:45.336 "iobuf_small_cache_size": 128, 00:14:45.336 
"iobuf_large_cache_size": 16 00:14:45.336 } 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "method": "bdev_raid_set_options", 00:14:45.336 "params": { 00:14:45.336 "process_window_size_kb": 1024, 00:14:45.336 "process_max_bandwidth_mb_sec": 0 00:14:45.336 } 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "method": "bdev_iscsi_set_options", 00:14:45.336 "params": { 00:14:45.336 "timeout_sec": 30 00:14:45.336 } 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "method": "bdev_nvme_set_options", 00:14:45.336 "params": { 00:14:45.336 "action_on_timeout": "none", 00:14:45.336 "timeout_us": 0, 00:14:45.336 "timeout_admin_us": 0, 00:14:45.336 "keep_alive_timeout_ms": 10000, 00:14:45.336 "arbitration_burst": 0, 00:14:45.336 "low_priority_weight": 0, 00:14:45.336 "medium_priority_weight": 0, 00:14:45.336 "high_priority_weight": 0, 00:14:45.336 "nvme_adminq_poll_period_us": 10000, 00:14:45.336 "nvme_ioq_poll_period_us": 0, 00:14:45.336 "io_queue_requests": 512, 00:14:45.336 "delay_cmd_submit": true, 00:14:45.336 "transport_retry_count": 4, 00:14:45.336 "bdev_retry_count": 3, 00:14:45.336 "transport_ack_timeout": 0, 00:14:45.336 "ctrlr_loss_timeout_sec": 0, 00:14:45.336 "reconnect_delay_sec": 0, 00:14:45.336 "fast_io_fail_timeout_sec": 0, 00:14:45.336 "disable_auto_failback": false, 00:14:45.336 "generate_uuids": false, 00:14:45.336 "transport_tos": 0, 00:14:45.336 "nvme_error_stat": false, 00:14:45.336 "rdma_srq_size": 0, 00:14:45.336 "io_path_stat": false, 00:14:45.336 "allow_accel_sequence": false, 00:14:45.336 "rdma_max_cq_size": 0, 00:14:45.336 "rdma_cm_event_timeout_ms": 0, 00:14:45.336 "dhchap_digests": [ 00:14:45.336 "sha256", 00:14:45.336 "sha384", 00:14:45.336 "sha512" 00:14:45.336 ], 00:14:45.336 "dhchap_dhgroups": [ 00:14:45.336 "null", 00:14:45.336 "ffdhe2048", 00:14:45.336 "ffdhe3072", 00:14:45.336 "ffdhe4096", 00:14:45.336 "ffdhe6144", 00:14:45.336 "ffdhe8192" 00:14:45.336 ] 00:14:45.336 } 00:14:45.336 }, 00:14:45.336 { 00:14:45.336 "method": "bdev_nvme_attach_controller", 00:14:45.336 "params": { 00:14:45.336 "name": "TLSTEST", 00:14:45.336 "trtype": "TCP", 00:14:45.336 "adrfam": "IPv4", 00:14:45.336 "traddr": "10.0.0.3", 00:14:45.336 "trsvcid": "4420", 00:14:45.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.336 "prchk_reftag": false, 00:14:45.336 "prchk_guard": false, 00:14:45.337 "ctrlr_loss_timeout_sec": 0, 00:14:45.337 "reconnect_delay_sec": 0, 00:14:45.337 "fast_io_fail_timeout_sec": 0, 00:14:45.337 "psk": "key0", 00:14:45.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.337 "hdgst": false, 00:14:45.337 "ddgst": false, 00:14:45.337 "multipath": "multipath" 00:14:45.337 } 00:14:45.337 }, 00:14:45.337 { 00:14:45.337 "method": "bdev_nvme_set_hotplug", 00:14:45.337 "params": { 00:14:45.337 "period_us": 100000, 00:14:45.337 "enable": false 00:14:45.337 } 00:14:45.337 }, 00:14:45.337 { 00:14:45.337 "method": "bdev_wait_for_examine" 00:14:45.337 } 00:14:45.337 ] 00:14:45.337 }, 00:14:45.337 { 00:14:45.337 "subsystem": "nbd", 00:14:45.337 "config": [] 00:14:45.337 } 00:14:45.337 ] 00:14:45.337 }' 00:14:45.337 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71842 ']' 00:14:45.337 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.337 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:45.337 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.337 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:45.337 07:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.337 [2024-11-08 07:42:03.096339] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:45.337 [2024-11-08 07:42:03.096435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71842 ] 00:14:45.337 [2024-11-08 07:42:03.246769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.337 [2024-11-08 07:42:03.291040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.595 [2024-11-08 07:42:03.414080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.595 [2024-11-08 07:42:03.454918] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.235 07:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:46.235 07:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:46.235 07:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:46.235 Running I/O for 10 seconds... 00:14:48.559 5710.00 IOPS, 22.30 MiB/s [2024-11-08T07:42:07.456Z] 5709.50 IOPS, 22.30 MiB/s [2024-11-08T07:42:08.392Z] 5708.33 IOPS, 22.30 MiB/s [2024-11-08T07:42:09.329Z] 5770.00 IOPS, 22.54 MiB/s [2024-11-08T07:42:10.265Z] 5805.60 IOPS, 22.68 MiB/s [2024-11-08T07:42:11.201Z] 5824.33 IOPS, 22.75 MiB/s [2024-11-08T07:42:12.578Z] 5837.71 IOPS, 22.80 MiB/s [2024-11-08T07:42:13.515Z] 5837.62 IOPS, 22.80 MiB/s [2024-11-08T07:42:14.451Z] 5838.44 IOPS, 22.81 MiB/s [2024-11-08T07:42:14.451Z] 5852.80 IOPS, 22.86 MiB/s 00:14:56.490 Latency(us) 00:14:56.490 [2024-11-08T07:42:14.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.490 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:56.490 Verification LBA range: start 0x0 length 0x2000 00:14:56.490 TLSTESTn1 : 10.01 5858.68 22.89 0.00 0.00 21814.54 4369.07 16976.94 00:14:56.490 [2024-11-08T07:42:14.451Z] =================================================================================================================== 00:14:56.490 [2024-11-08T07:42:14.451Z] Total : 5858.68 22.89 0.00 0.00 21814.54 4369.07 16976.94 00:14:56.490 { 00:14:56.490 "results": [ 00:14:56.490 { 00:14:56.490 "job": "TLSTESTn1", 00:14:56.490 "core_mask": "0x4", 00:14:56.490 "workload": "verify", 00:14:56.490 "status": "finished", 00:14:56.490 "verify_range": { 00:14:56.490 "start": 0, 00:14:56.490 "length": 8192 00:14:56.490 }, 00:14:56.490 "queue_depth": 128, 00:14:56.490 "io_size": 4096, 00:14:56.490 "runtime": 10.011816, 00:14:56.490 "iops": 5858.677386799757, 00:14:56.490 "mibps": 22.88545854218655, 00:14:56.490 "io_failed": 0, 00:14:56.490 "io_timeout": 0, 00:14:56.490 "avg_latency_us": 21814.542837399007, 00:14:56.490 "min_latency_us": 4369.066666666667, 00:14:56.490 "max_latency_us": 16976.94476190476 00:14:56.490 } 00:14:56.490 ], 00:14:56.490 "core_count": 1 00:14:56.490 } 00:14:56.490 07:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71842 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71842 ']' 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71842 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71842 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71842' 00:14:56.490 killing process with pid 71842 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71842 00:14:56.490 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.490 00:14:56.490 Latency(us) 00:14:56.490 [2024-11-08T07:42:14.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.490 [2024-11-08T07:42:14.451Z] =================================================================================================================== 00:14:56.490 [2024-11-08T07:42:14.451Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71842 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71810 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71810 ']' 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71810 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71810 00:14:56.490 killing process with pid 71810 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71810' 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71810 00:14:56.490 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71810 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:56.749 07:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71978 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71978 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71978 ']' 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:56.749 07:42:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.749 [2024-11-08 07:42:14.700783] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:56.749 [2024-11-08 07:42:14.700918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.008 [2024-11-08 07:42:14.852927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.008 [2024-11-08 07:42:14.896253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.008 [2024-11-08 07:42:14.896299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.008 [2024-11-08 07:42:14.896309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.008 [2024-11-08 07:42:14.896317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.008 [2024-11-08 07:42:14.896340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.008 [2024-11-08 07:42:14.896619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.008 [2024-11-08 07:42:14.937963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Qkew0YUEwh 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Qkew0YUEwh 00:14:57.946 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:57.946 [2024-11-08 07:42:15.890119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.210 07:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:58.513 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:58.513 [2024-11-08 07:42:16.410468] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.513 [2024-11-08 07:42:16.410665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.513 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.786 malloc0 00:14:58.786 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:59.045 07:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:59.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72034 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72034 /var/tmp/bdevperf.sock 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72034 ']' 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:59.305 07:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.305 [2024-11-08 07:42:17.260144] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:14:59.305 [2024-11-08 07:42:17.260475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72034 ] 00:14:59.564 [2024-11-08 07:42:17.411187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.564 [2024-11-08 07:42:17.454840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.564 [2024-11-08 07:42:17.497105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.132 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:00.132 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:00.132 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:15:00.391 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:00.651 [2024-11-08 07:42:18.533221] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.651 nvme0n1 00:15:00.910 07:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.910 Running I/O for 1 seconds... 
00:15:01.848 5878.00 IOPS, 22.96 MiB/s 00:15:01.848 Latency(us) 00:15:01.848 [2024-11-08T07:42:19.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.848 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:01.848 Verification LBA range: start 0x0 length 0x2000 00:15:01.848 nvme0n1 : 1.01 5941.43 23.21 0.00 0.00 21406.26 3760.52 16103.13 00:15:01.848 [2024-11-08T07:42:19.809Z] =================================================================================================================== 00:15:01.848 [2024-11-08T07:42:19.809Z] Total : 5941.43 23.21 0.00 0.00 21406.26 3760.52 16103.13 00:15:01.848 { 00:15:01.848 "results": [ 00:15:01.848 { 00:15:01.848 "job": "nvme0n1", 00:15:01.848 "core_mask": "0x2", 00:15:01.848 "workload": "verify", 00:15:01.848 "status": "finished", 00:15:01.848 "verify_range": { 00:15:01.848 "start": 0, 00:15:01.848 "length": 8192 00:15:01.848 }, 00:15:01.848 "queue_depth": 128, 00:15:01.848 "io_size": 4096, 00:15:01.848 "runtime": 1.010868, 00:15:01.848 "iops": 5941.42855447002, 00:15:01.848 "mibps": 23.208705290898514, 00:15:01.848 "io_failed": 0, 00:15:01.848 "io_timeout": 0, 00:15:01.848 "avg_latency_us": 21406.26120292406, 00:15:01.848 "min_latency_us": 3760.518095238095, 00:15:01.848 "max_latency_us": 16103.131428571429 00:15:01.848 } 00:15:01.848 ], 00:15:01.848 "core_count": 1 00:15:01.848 } 00:15:01.848 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72034 00:15:01.848 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72034 ']' 00:15:01.848 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72034 00:15:01.848 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:01.848 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:01.848 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72034 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72034' 00:15:02.107 killing process with pid 72034 00:15:02.107 Received shutdown signal, test time was about 1.000000 seconds 00:15:02.107 00:15:02.107 Latency(us) 00:15:02.107 [2024-11-08T07:42:20.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.107 [2024-11-08T07:42:20.068Z] =================================================================================================================== 00:15:02.107 [2024-11-08T07:42:20.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72034 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72034 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 71978 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71978 ']' 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71978 00:15:02.107 07:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:02.107 07:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71978 00:15:02.107 killing process with pid 71978 00:15:02.107 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:02.107 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:02.107 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71978' 00:15:02.107 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71978 00:15:02.107 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71978 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72085 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72085 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72085 ']' 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:02.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:02.366 07:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.366 [2024-11-08 07:42:20.263139] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:02.366 [2024-11-08 07:42:20.263233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.624 [2024-11-08 07:42:20.412372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.624 [2024-11-08 07:42:20.457658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.624 [2024-11-08 07:42:20.457703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:02.624 [2024-11-08 07:42:20.457712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.624 [2024-11-08 07:42:20.457720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.624 [2024-11-08 07:42:20.457727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.624 [2024-11-08 07:42:20.458005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.624 [2024-11-08 07:42:20.499080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.562 [2024-11-08 07:42:21.267402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.562 malloc0 00:15:03.562 [2024-11-08 07:42:21.295930] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.562 [2024-11-08 07:42:21.296257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72117 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72117 /var/tmp/bdevperf.sock 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72117 ']' 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:03.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:03.562 07:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.562 [2024-11-08 07:42:21.368027] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:03.562 [2024-11-08 07:42:21.368239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72117 ] 00:15:03.562 [2024-11-08 07:42:21.519053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.821 [2024-11-08 07:42:21.575797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.821 [2024-11-08 07:42:21.624158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.389 07:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:04.389 07:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:04.389 07:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qkew0YUEwh 00:15:04.648 07:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:04.907 [2024-11-08 07:42:22.772735] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.907 nvme0n1 00:15:04.907 07:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:05.166 Running I/O for 1 seconds... 
00:15:06.104 5879.00 IOPS, 22.96 MiB/s 00:15:06.104 Latency(us) 00:15:06.104 [2024-11-08T07:42:24.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.104 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.104 Verification LBA range: start 0x0 length 0x2000 00:15:06.104 nvme0n1 : 1.01 5935.02 23.18 0.00 0.00 21431.49 3635.69 17226.61 00:15:06.104 [2024-11-08T07:42:24.065Z] =================================================================================================================== 00:15:06.104 [2024-11-08T07:42:24.065Z] Total : 5935.02 23.18 0.00 0.00 21431.49 3635.69 17226.61 00:15:06.104 { 00:15:06.104 "results": [ 00:15:06.104 { 00:15:06.104 "job": "nvme0n1", 00:15:06.104 "core_mask": "0x2", 00:15:06.104 "workload": "verify", 00:15:06.104 "status": "finished", 00:15:06.104 "verify_range": { 00:15:06.104 "start": 0, 00:15:06.104 "length": 8192 00:15:06.104 }, 00:15:06.104 "queue_depth": 128, 00:15:06.104 "io_size": 4096, 00:15:06.104 "runtime": 1.012297, 00:15:06.104 "iops": 5935.017094785424, 00:15:06.104 "mibps": 23.183660526505562, 00:15:06.104 "io_failed": 0, 00:15:06.104 "io_timeout": 0, 00:15:06.104 "avg_latency_us": 21431.490852831146, 00:15:06.104 "min_latency_us": 3635.687619047619, 00:15:06.104 "max_latency_us": 17226.605714285713 00:15:06.104 } 00:15:06.104 ], 00:15:06.104 "core_count": 1 00:15:06.104 } 00:15:06.104 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:06.104 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.104 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.364 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.364 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:06.364 "subsystems": [ 00:15:06.364 { 00:15:06.364 "subsystem": "keyring", 00:15:06.364 "config": [ 00:15:06.364 { 00:15:06.364 "method": "keyring_file_add_key", 00:15:06.364 "params": { 00:15:06.364 "name": "key0", 00:15:06.364 "path": "/tmp/tmp.Qkew0YUEwh" 00:15:06.364 } 00:15:06.364 } 00:15:06.364 ] 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "subsystem": "iobuf", 00:15:06.364 "config": [ 00:15:06.364 { 00:15:06.364 "method": "iobuf_set_options", 00:15:06.364 "params": { 00:15:06.364 "small_pool_count": 8192, 00:15:06.364 "large_pool_count": 1024, 00:15:06.364 "small_bufsize": 8192, 00:15:06.364 "large_bufsize": 135168, 00:15:06.364 "enable_numa": false 00:15:06.364 } 00:15:06.364 } 00:15:06.364 ] 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "subsystem": "sock", 00:15:06.364 "config": [ 00:15:06.364 { 00:15:06.364 "method": "sock_set_default_impl", 00:15:06.364 "params": { 00:15:06.364 "impl_name": "uring" 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "sock_impl_set_options", 00:15:06.364 "params": { 00:15:06.364 "impl_name": "ssl", 00:15:06.364 "recv_buf_size": 4096, 00:15:06.364 "send_buf_size": 4096, 00:15:06.364 "enable_recv_pipe": true, 00:15:06.364 "enable_quickack": false, 00:15:06.364 "enable_placement_id": 0, 00:15:06.364 "enable_zerocopy_send_server": true, 00:15:06.364 "enable_zerocopy_send_client": false, 00:15:06.364 "zerocopy_threshold": 0, 00:15:06.364 "tls_version": 0, 00:15:06.364 "enable_ktls": false 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "sock_impl_set_options", 00:15:06.364 "params": { 00:15:06.364 "impl_name": 
"posix", 00:15:06.364 "recv_buf_size": 2097152, 00:15:06.364 "send_buf_size": 2097152, 00:15:06.364 "enable_recv_pipe": true, 00:15:06.364 "enable_quickack": false, 00:15:06.364 "enable_placement_id": 0, 00:15:06.364 "enable_zerocopy_send_server": true, 00:15:06.364 "enable_zerocopy_send_client": false, 00:15:06.364 "zerocopy_threshold": 0, 00:15:06.364 "tls_version": 0, 00:15:06.364 "enable_ktls": false 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "sock_impl_set_options", 00:15:06.364 "params": { 00:15:06.364 "impl_name": "uring", 00:15:06.364 "recv_buf_size": 2097152, 00:15:06.364 "send_buf_size": 2097152, 00:15:06.364 "enable_recv_pipe": true, 00:15:06.364 "enable_quickack": false, 00:15:06.364 "enable_placement_id": 0, 00:15:06.364 "enable_zerocopy_send_server": false, 00:15:06.364 "enable_zerocopy_send_client": false, 00:15:06.364 "zerocopy_threshold": 0, 00:15:06.364 "tls_version": 0, 00:15:06.364 "enable_ktls": false 00:15:06.364 } 00:15:06.364 } 00:15:06.364 ] 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "subsystem": "vmd", 00:15:06.364 "config": [] 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "subsystem": "accel", 00:15:06.364 "config": [ 00:15:06.364 { 00:15:06.364 "method": "accel_set_options", 00:15:06.364 "params": { 00:15:06.364 "small_cache_size": 128, 00:15:06.364 "large_cache_size": 16, 00:15:06.364 "task_count": 2048, 00:15:06.364 "sequence_count": 2048, 00:15:06.364 "buf_count": 2048 00:15:06.364 } 00:15:06.364 } 00:15:06.364 ] 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "subsystem": "bdev", 00:15:06.364 "config": [ 00:15:06.364 { 00:15:06.364 "method": "bdev_set_options", 00:15:06.364 "params": { 00:15:06.364 "bdev_io_pool_size": 65535, 00:15:06.364 "bdev_io_cache_size": 256, 00:15:06.364 "bdev_auto_examine": true, 00:15:06.364 "iobuf_small_cache_size": 128, 00:15:06.364 "iobuf_large_cache_size": 16 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "bdev_raid_set_options", 00:15:06.364 "params": { 00:15:06.364 "process_window_size_kb": 1024, 00:15:06.364 "process_max_bandwidth_mb_sec": 0 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "bdev_iscsi_set_options", 00:15:06.364 "params": { 00:15:06.364 "timeout_sec": 30 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "bdev_nvme_set_options", 00:15:06.364 "params": { 00:15:06.364 "action_on_timeout": "none", 00:15:06.364 "timeout_us": 0, 00:15:06.364 "timeout_admin_us": 0, 00:15:06.364 "keep_alive_timeout_ms": 10000, 00:15:06.364 "arbitration_burst": 0, 00:15:06.364 "low_priority_weight": 0, 00:15:06.364 "medium_priority_weight": 0, 00:15:06.364 "high_priority_weight": 0, 00:15:06.364 "nvme_adminq_poll_period_us": 10000, 00:15:06.364 "nvme_ioq_poll_period_us": 0, 00:15:06.364 "io_queue_requests": 0, 00:15:06.364 "delay_cmd_submit": true, 00:15:06.364 "transport_retry_count": 4, 00:15:06.364 "bdev_retry_count": 3, 00:15:06.364 "transport_ack_timeout": 0, 00:15:06.364 "ctrlr_loss_timeout_sec": 0, 00:15:06.364 "reconnect_delay_sec": 0, 00:15:06.364 "fast_io_fail_timeout_sec": 0, 00:15:06.364 "disable_auto_failback": false, 00:15:06.364 "generate_uuids": false, 00:15:06.364 "transport_tos": 0, 00:15:06.364 "nvme_error_stat": false, 00:15:06.364 "rdma_srq_size": 0, 00:15:06.364 "io_path_stat": false, 00:15:06.364 "allow_accel_sequence": false, 00:15:06.364 "rdma_max_cq_size": 0, 00:15:06.364 "rdma_cm_event_timeout_ms": 0, 00:15:06.364 "dhchap_digests": [ 00:15:06.364 "sha256", 00:15:06.364 "sha384", 00:15:06.364 "sha512" 00:15:06.364 ], 00:15:06.364 
"dhchap_dhgroups": [ 00:15:06.364 "null", 00:15:06.364 "ffdhe2048", 00:15:06.364 "ffdhe3072", 00:15:06.364 "ffdhe4096", 00:15:06.364 "ffdhe6144", 00:15:06.364 "ffdhe8192" 00:15:06.364 ] 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "bdev_nvme_set_hotplug", 00:15:06.364 "params": { 00:15:06.364 "period_us": 100000, 00:15:06.364 "enable": false 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "bdev_malloc_create", 00:15:06.364 "params": { 00:15:06.364 "name": "malloc0", 00:15:06.364 "num_blocks": 8192, 00:15:06.364 "block_size": 4096, 00:15:06.364 "physical_block_size": 4096, 00:15:06.364 "uuid": "e6ecc18d-5f5f-468c-b008-efec6f644e96", 00:15:06.364 "optimal_io_boundary": 0, 00:15:06.364 "md_size": 0, 00:15:06.364 "dif_type": 0, 00:15:06.364 "dif_is_head_of_md": false, 00:15:06.364 "dif_pi_format": 0 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "bdev_wait_for_examine" 00:15:06.364 } 00:15:06.364 ] 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "subsystem": "nbd", 00:15:06.364 "config": [] 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "subsystem": "scheduler", 00:15:06.364 "config": [ 00:15:06.364 { 00:15:06.364 "method": "framework_set_scheduler", 00:15:06.364 "params": { 00:15:06.364 "name": "static" 00:15:06.364 } 00:15:06.364 } 00:15:06.364 ] 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "subsystem": "nvmf", 00:15:06.364 "config": [ 00:15:06.364 { 00:15:06.364 "method": "nvmf_set_config", 00:15:06.364 "params": { 00:15:06.364 "discovery_filter": "match_any", 00:15:06.364 "admin_cmd_passthru": { 00:15:06.364 "identify_ctrlr": false 00:15:06.364 }, 00:15:06.364 "dhchap_digests": [ 00:15:06.364 "sha256", 00:15:06.364 "sha384", 00:15:06.364 "sha512" 00:15:06.364 ], 00:15:06.364 "dhchap_dhgroups": [ 00:15:06.364 "null", 00:15:06.364 "ffdhe2048", 00:15:06.364 "ffdhe3072", 00:15:06.364 "ffdhe4096", 00:15:06.364 "ffdhe6144", 00:15:06.364 "ffdhe8192" 00:15:06.364 ] 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "nvmf_set_max_subsystems", 00:15:06.364 "params": { 00:15:06.364 "max_subsystems": 1024 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "nvmf_set_crdt", 00:15:06.364 "params": { 00:15:06.364 "crdt1": 0, 00:15:06.364 "crdt2": 0, 00:15:06.364 "crdt3": 0 00:15:06.364 } 00:15:06.364 }, 00:15:06.364 { 00:15:06.364 "method": "nvmf_create_transport", 00:15:06.365 "params": { 00:15:06.365 "trtype": "TCP", 00:15:06.365 "max_queue_depth": 128, 00:15:06.365 "max_io_qpairs_per_ctrlr": 127, 00:15:06.365 "in_capsule_data_size": 4096, 00:15:06.365 "max_io_size": 131072, 00:15:06.365 "io_unit_size": 131072, 00:15:06.365 "max_aq_depth": 128, 00:15:06.365 "num_shared_buffers": 511, 00:15:06.365 "buf_cache_size": 4294967295, 00:15:06.365 "dif_insert_or_strip": false, 00:15:06.365 "zcopy": false, 00:15:06.365 "c2h_success": false, 00:15:06.365 "sock_priority": 0, 00:15:06.365 "abort_timeout_sec": 1, 00:15:06.365 "ack_timeout": 0, 00:15:06.365 "data_wr_pool_size": 0 00:15:06.365 } 00:15:06.365 }, 00:15:06.365 { 00:15:06.365 "method": "nvmf_create_subsystem", 00:15:06.365 "params": { 00:15:06.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.365 "allow_any_host": false, 00:15:06.365 "serial_number": "00000000000000000000", 00:15:06.365 "model_number": "SPDK bdev Controller", 00:15:06.365 "max_namespaces": 32, 00:15:06.365 "min_cntlid": 1, 00:15:06.365 "max_cntlid": 65519, 00:15:06.365 "ana_reporting": false 00:15:06.365 } 00:15:06.365 }, 00:15:06.365 { 00:15:06.365 "method": "nvmf_subsystem_add_host", 
00:15:06.365 "params": { 00:15:06.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.365 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.365 "psk": "key0" 00:15:06.365 } 00:15:06.365 }, 00:15:06.365 { 00:15:06.365 "method": "nvmf_subsystem_add_ns", 00:15:06.365 "params": { 00:15:06.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.365 "namespace": { 00:15:06.365 "nsid": 1, 00:15:06.365 "bdev_name": "malloc0", 00:15:06.365 "nguid": "E6ECC18D5F5F468CB008EFEC6F644E96", 00:15:06.365 "uuid": "e6ecc18d-5f5f-468c-b008-efec6f644e96", 00:15:06.365 "no_auto_visible": false 00:15:06.365 } 00:15:06.365 } 00:15:06.365 }, 00:15:06.365 { 00:15:06.365 "method": "nvmf_subsystem_add_listener", 00:15:06.365 "params": { 00:15:06.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.365 "listen_address": { 00:15:06.365 "trtype": "TCP", 00:15:06.365 "adrfam": "IPv4", 00:15:06.365 "traddr": "10.0.0.3", 00:15:06.365 "trsvcid": "4420" 00:15:06.365 }, 00:15:06.365 "secure_channel": false, 00:15:06.365 "sock_impl": "ssl" 00:15:06.365 } 00:15:06.365 } 00:15:06.365 ] 00:15:06.365 } 00:15:06.365 ] 00:15:06.365 }' 00:15:06.365 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:06.625 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:06.625 "subsystems": [ 00:15:06.625 { 00:15:06.625 "subsystem": "keyring", 00:15:06.625 "config": [ 00:15:06.625 { 00:15:06.625 "method": "keyring_file_add_key", 00:15:06.625 "params": { 00:15:06.625 "name": "key0", 00:15:06.625 "path": "/tmp/tmp.Qkew0YUEwh" 00:15:06.625 } 00:15:06.625 } 00:15:06.625 ] 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "subsystem": "iobuf", 00:15:06.625 "config": [ 00:15:06.625 { 00:15:06.625 "method": "iobuf_set_options", 00:15:06.625 "params": { 00:15:06.625 "small_pool_count": 8192, 00:15:06.625 "large_pool_count": 1024, 00:15:06.625 "small_bufsize": 8192, 00:15:06.625 "large_bufsize": 135168, 00:15:06.625 "enable_numa": false 00:15:06.625 } 00:15:06.625 } 00:15:06.625 ] 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "subsystem": "sock", 00:15:06.625 "config": [ 00:15:06.625 { 00:15:06.625 "method": "sock_set_default_impl", 00:15:06.625 "params": { 00:15:06.625 "impl_name": "uring" 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "sock_impl_set_options", 00:15:06.625 "params": { 00:15:06.625 "impl_name": "ssl", 00:15:06.625 "recv_buf_size": 4096, 00:15:06.625 "send_buf_size": 4096, 00:15:06.625 "enable_recv_pipe": true, 00:15:06.625 "enable_quickack": false, 00:15:06.625 "enable_placement_id": 0, 00:15:06.625 "enable_zerocopy_send_server": true, 00:15:06.625 "enable_zerocopy_send_client": false, 00:15:06.625 "zerocopy_threshold": 0, 00:15:06.625 "tls_version": 0, 00:15:06.625 "enable_ktls": false 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "sock_impl_set_options", 00:15:06.625 "params": { 00:15:06.625 "impl_name": "posix", 00:15:06.625 "recv_buf_size": 2097152, 00:15:06.625 "send_buf_size": 2097152, 00:15:06.625 "enable_recv_pipe": true, 00:15:06.625 "enable_quickack": false, 00:15:06.625 "enable_placement_id": 0, 00:15:06.625 "enable_zerocopy_send_server": true, 00:15:06.625 "enable_zerocopy_send_client": false, 00:15:06.625 "zerocopy_threshold": 0, 00:15:06.625 "tls_version": 0, 00:15:06.625 "enable_ktls": false 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "sock_impl_set_options", 00:15:06.625 "params": { 00:15:06.625 "impl_name": "uring", 00:15:06.625 
"recv_buf_size": 2097152, 00:15:06.625 "send_buf_size": 2097152, 00:15:06.625 "enable_recv_pipe": true, 00:15:06.625 "enable_quickack": false, 00:15:06.625 "enable_placement_id": 0, 00:15:06.625 "enable_zerocopy_send_server": false, 00:15:06.625 "enable_zerocopy_send_client": false, 00:15:06.625 "zerocopy_threshold": 0, 00:15:06.625 "tls_version": 0, 00:15:06.625 "enable_ktls": false 00:15:06.625 } 00:15:06.625 } 00:15:06.625 ] 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "subsystem": "vmd", 00:15:06.625 "config": [] 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "subsystem": "accel", 00:15:06.625 "config": [ 00:15:06.625 { 00:15:06.625 "method": "accel_set_options", 00:15:06.625 "params": { 00:15:06.625 "small_cache_size": 128, 00:15:06.625 "large_cache_size": 16, 00:15:06.625 "task_count": 2048, 00:15:06.625 "sequence_count": 2048, 00:15:06.625 "buf_count": 2048 00:15:06.625 } 00:15:06.625 } 00:15:06.625 ] 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "subsystem": "bdev", 00:15:06.625 "config": [ 00:15:06.625 { 00:15:06.625 "method": "bdev_set_options", 00:15:06.625 "params": { 00:15:06.625 "bdev_io_pool_size": 65535, 00:15:06.625 "bdev_io_cache_size": 256, 00:15:06.625 "bdev_auto_examine": true, 00:15:06.625 "iobuf_small_cache_size": 128, 00:15:06.625 "iobuf_large_cache_size": 16 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "bdev_raid_set_options", 00:15:06.625 "params": { 00:15:06.625 "process_window_size_kb": 1024, 00:15:06.625 "process_max_bandwidth_mb_sec": 0 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "bdev_iscsi_set_options", 00:15:06.625 "params": { 00:15:06.625 "timeout_sec": 30 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "bdev_nvme_set_options", 00:15:06.625 "params": { 00:15:06.625 "action_on_timeout": "none", 00:15:06.625 "timeout_us": 0, 00:15:06.625 "timeout_admin_us": 0, 00:15:06.625 "keep_alive_timeout_ms": 10000, 00:15:06.625 "arbitration_burst": 0, 00:15:06.625 "low_priority_weight": 0, 00:15:06.625 "medium_priority_weight": 0, 00:15:06.625 "high_priority_weight": 0, 00:15:06.625 "nvme_adminq_poll_period_us": 10000, 00:15:06.625 "nvme_ioq_poll_period_us": 0, 00:15:06.625 "io_queue_requests": 512, 00:15:06.625 "delay_cmd_submit": true, 00:15:06.625 "transport_retry_count": 4, 00:15:06.625 "bdev_retry_count": 3, 00:15:06.625 "transport_ack_timeout": 0, 00:15:06.625 "ctrlr_loss_timeout_sec": 0, 00:15:06.625 "reconnect_delay_sec": 0, 00:15:06.625 "fast_io_fail_timeout_sec": 0, 00:15:06.625 "disable_auto_failback": false, 00:15:06.625 "generate_uuids": false, 00:15:06.625 "transport_tos": 0, 00:15:06.625 "nvme_error_stat": false, 00:15:06.625 "rdma_srq_size": 0, 00:15:06.625 "io_path_stat": false, 00:15:06.625 "allow_accel_sequence": false, 00:15:06.625 "rdma_max_cq_size": 0, 00:15:06.625 "rdma_cm_event_timeout_ms": 0, 00:15:06.625 "dhchap_digests": [ 00:15:06.625 "sha256", 00:15:06.625 "sha384", 00:15:06.625 "sha512" 00:15:06.625 ], 00:15:06.625 "dhchap_dhgroups": [ 00:15:06.625 "null", 00:15:06.625 "ffdhe2048", 00:15:06.625 "ffdhe3072", 00:15:06.625 "ffdhe4096", 00:15:06.625 "ffdhe6144", 00:15:06.625 "ffdhe8192" 00:15:06.625 ] 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "bdev_nvme_attach_controller", 00:15:06.625 "params": { 00:15:06.625 "name": "nvme0", 00:15:06.625 "trtype": "TCP", 00:15:06.625 "adrfam": "IPv4", 00:15:06.625 "traddr": "10.0.0.3", 00:15:06.625 "trsvcid": "4420", 00:15:06.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.625 "prchk_reftag": false, 00:15:06.625 
"prchk_guard": false, 00:15:06.625 "ctrlr_loss_timeout_sec": 0, 00:15:06.625 "reconnect_delay_sec": 0, 00:15:06.625 "fast_io_fail_timeout_sec": 0, 00:15:06.625 "psk": "key0", 00:15:06.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:06.625 "hdgst": false, 00:15:06.625 "ddgst": false, 00:15:06.625 "multipath": "multipath" 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "bdev_nvme_set_hotplug", 00:15:06.625 "params": { 00:15:06.625 "period_us": 100000, 00:15:06.625 "enable": false 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "bdev_enable_histogram", 00:15:06.625 "params": { 00:15:06.625 "name": "nvme0n1", 00:15:06.625 "enable": true 00:15:06.625 } 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "method": "bdev_wait_for_examine" 00:15:06.625 } 00:15:06.625 ] 00:15:06.625 }, 00:15:06.625 { 00:15:06.625 "subsystem": "nbd", 00:15:06.625 "config": [] 00:15:06.625 } 00:15:06.625 ] 00:15:06.625 }' 00:15:06.625 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72117 00:15:06.625 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72117 ']' 00:15:06.625 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72117 00:15:06.626 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:06.626 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:06.626 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72117 00:15:06.626 killing process with pid 72117 00:15:06.626 Received shutdown signal, test time was about 1.000000 seconds 00:15:06.626 00:15:06.626 Latency(us) 00:15:06.626 [2024-11-08T07:42:24.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.626 [2024-11-08T07:42:24.587Z] =================================================================================================================== 00:15:06.626 [2024-11-08T07:42:24.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.626 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:06.626 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:06.626 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72117' 00:15:06.626 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72117 00:15:06.626 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72117 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72085 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72085 ']' 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72085 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72085 00:15:06.884 killing process with pid 72085 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72085' 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72085 00:15:06.884 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72085 00:15:07.144 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:07.144 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:07.144 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:07.144 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:07.144 "subsystems": [ 00:15:07.144 { 00:15:07.144 "subsystem": "keyring", 00:15:07.144 "config": [ 00:15:07.144 { 00:15:07.144 "method": "keyring_file_add_key", 00:15:07.144 "params": { 00:15:07.144 "name": "key0", 00:15:07.144 "path": "/tmp/tmp.Qkew0YUEwh" 00:15:07.144 } 00:15:07.144 } 00:15:07.144 ] 00:15:07.144 }, 00:15:07.144 { 00:15:07.144 "subsystem": "iobuf", 00:15:07.144 "config": [ 00:15:07.144 { 00:15:07.144 "method": "iobuf_set_options", 00:15:07.144 "params": { 00:15:07.144 "small_pool_count": 8192, 00:15:07.144 "large_pool_count": 1024, 00:15:07.144 "small_bufsize": 8192, 00:15:07.144 "large_bufsize": 135168, 00:15:07.144 "enable_numa": false 00:15:07.144 } 00:15:07.144 } 00:15:07.144 ] 00:15:07.144 }, 00:15:07.144 { 00:15:07.144 "subsystem": "sock", 00:15:07.144 "config": [ 00:15:07.144 { 00:15:07.144 "method": "sock_set_default_impl", 00:15:07.144 "params": { 00:15:07.144 "impl_name": "uring" 00:15:07.144 } 00:15:07.144 }, 00:15:07.144 { 00:15:07.144 "method": "sock_impl_set_options", 00:15:07.144 "params": { 00:15:07.144 "impl_name": "ssl", 00:15:07.144 "recv_buf_size": 4096, 00:15:07.144 "send_buf_size": 4096, 00:15:07.144 "enable_recv_pipe": true, 00:15:07.144 "enable_quickack": false, 00:15:07.144 "enable_placement_id": 0, 00:15:07.144 "enable_zerocopy_send_server": true, 00:15:07.144 "enable_zerocopy_send_client": false, 00:15:07.144 "zerocopy_threshold": 0, 00:15:07.144 "tls_version": 0, 00:15:07.144 "enable_ktls": false 00:15:07.144 } 00:15:07.144 }, 00:15:07.144 { 00:15:07.144 "method": "sock_impl_set_options", 00:15:07.144 "params": { 00:15:07.144 "impl_name": "posix", 00:15:07.144 "recv_buf_size": 2097152, 00:15:07.144 "send_buf_size": 2097152, 00:15:07.144 "enable_recv_pipe": true, 00:15:07.144 "enable_quickack": false, 00:15:07.144 "enable_placement_id": 0, 00:15:07.144 "enable_zerocopy_send_server": true, 00:15:07.144 "enable_zerocopy_send_client": false, 00:15:07.144 "zerocopy_threshold": 0, 00:15:07.144 "tls_version": 0, 00:15:07.144 "enable_ktls": false 00:15:07.144 } 00:15:07.144 }, 00:15:07.144 { 00:15:07.144 "method": "sock_impl_set_options", 00:15:07.144 "params": { 00:15:07.144 "impl_name": "uring", 00:15:07.144 "recv_buf_size": 2097152, 00:15:07.144 "send_buf_size": 2097152, 00:15:07.144 "enable_recv_pipe": true, 00:15:07.144 "enable_quickack": false, 00:15:07.144 "enable_placement_id": 0, 00:15:07.144 "enable_zerocopy_send_server": false, 00:15:07.144 "enable_zerocopy_send_client": false, 00:15:07.144 "zerocopy_threshold": 0, 00:15:07.144 "tls_version": 0, 00:15:07.144 "enable_ktls": false 00:15:07.144 } 00:15:07.144 } 00:15:07.144 ] 00:15:07.144 }, 00:15:07.144 { 
00:15:07.144 "subsystem": "vmd", 00:15:07.144 "config": [] 00:15:07.144 }, 00:15:07.144 { 00:15:07.144 "subsystem": "accel", 00:15:07.144 "config": [ 00:15:07.144 { 00:15:07.144 "method": "accel_set_options", 00:15:07.144 "params": { 00:15:07.144 "small_cache_size": 128, 00:15:07.144 "large_cache_size": 16, 00:15:07.144 "task_count": 2048, 00:15:07.144 "sequence_count": 2048, 00:15:07.144 "buf_count": 2048 00:15:07.144 } 00:15:07.144 } 00:15:07.144 ] 00:15:07.144 }, 00:15:07.144 { 00:15:07.144 "subsystem": "bdev", 00:15:07.144 "config": [ 00:15:07.144 { 00:15:07.144 "method": "bdev_set_options", 00:15:07.144 "params": { 00:15:07.144 "bdev_io_pool_size": 65535, 00:15:07.144 "bdev_io_cache_size": 256, 00:15:07.144 "bdev_auto_examine": true, 00:15:07.145 "iobuf_small_cache_size": 128, 00:15:07.145 "iobuf_large_cache_size": 16 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "bdev_raid_set_options", 00:15:07.145 "params": { 00:15:07.145 "process_window_size_kb": 1024, 00:15:07.145 "process_max_bandwidth_mb_sec": 0 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "bdev_iscsi_set_options", 00:15:07.145 "params": { 00:15:07.145 "timeout_sec": 30 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "bdev_nvme_set_options", 00:15:07.145 "params": { 00:15:07.145 "action_on_timeout": "none", 00:15:07.145 "timeout_us": 0, 00:15:07.145 "timeout_admin_us": 0, 00:15:07.145 "keep_alive_timeout_ms": 10000, 00:15:07.145 "arbitration_burst": 0, 00:15:07.145 "low_priority_weight": 0, 00:15:07.145 "medium_priority_weight": 0, 00:15:07.145 "high_priority_weight": 0, 00:15:07.145 "nvme_adminq_poll_period_us": 10000, 00:15:07.145 "nvme_ioq_poll_period_us": 0, 00:15:07.145 "io_queue_requests": 0, 00:15:07.145 "delay_cmd_submit": true, 00:15:07.145 "transport_retry_count": 4, 00:15:07.145 "bdev_retry_count": 3, 00:15:07.145 "transport_ack_timeout": 0, 00:15:07.145 "ctrlr_loss_timeout_sec": 0, 00:15:07.145 "reconnect_delay_sec": 0, 00:15:07.145 "fast_io_fail_timeout_sec": 0, 00:15:07.145 "disable_auto_failback": false, 00:15:07.145 "generate_uuids": false, 00:15:07.145 "transport_tos": 0, 00:15:07.145 "nvme_error_stat": false, 00:15:07.145 "rdma_srq_size": 0, 00:15:07.145 "io_path_stat": false, 00:15:07.145 "allow_accel_sequence": false, 00:15:07.145 "rdma_max_cq_size": 0, 00:15:07.145 "rdma_cm_event_timeout_ms": 0, 00:15:07.145 "dhchap_digests": [ 00:15:07.145 "sha256", 00:15:07.145 "sha384", 00:15:07.145 "sha512" 00:15:07.145 ], 00:15:07.145 "dhchap_dhgroups": [ 00:15:07.145 "null", 00:15:07.145 "ffdhe2048", 00:15:07.145 "ffdhe3072", 00:15:07.145 "ffdhe4096", 00:15:07.145 "ffdhe6144", 00:15:07.145 "ffdhe8192" 00:15:07.145 ] 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "bdev_nvme_set_hotplug", 00:15:07.145 "params": { 00:15:07.145 "period_us": 100000, 00:15:07.145 "enable": false 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "bdev_malloc_create", 00:15:07.145 "params": { 00:15:07.145 "name": "malloc0", 00:15:07.145 "num_blocks": 8192, 00:15:07.145 "block_size": 4096, 00:15:07.145 "physical_block_size": 4096, 00:15:07.145 "uuid": "e6ecc18d-5f5f-468c-b008-efec6f644e96", 00:15:07.145 "optimal_io_boundary": 0, 00:15:07.145 "md_size": 0, 00:15:07.145 "dif_type": 0, 00:15:07.145 "dif_is_head_of_md": false, 00:15:07.145 "dif_pi_format": 0 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "bdev_wait_for_examine" 00:15:07.145 } 00:15:07.145 ] 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "subsystem": 
"nbd", 00:15:07.145 "config": [] 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "subsystem": "scheduler", 00:15:07.145 "config": [ 00:15:07.145 { 00:15:07.145 "method": "framework_set_scheduler", 00:15:07.145 "params": { 00:15:07.145 "name": "static" 00:15:07.145 } 00:15:07.145 } 00:15:07.145 ] 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "subsystem": "nvmf", 00:15:07.145 "config": [ 00:15:07.145 { 00:15:07.145 "method": "nvmf_set_config", 00:15:07.145 "params": { 00:15:07.145 "discovery_filter": "match_any", 00:15:07.145 "admin_cmd_passthru": { 00:15:07.145 "identify_ctrlr": false 00:15:07.145 }, 00:15:07.145 "dhchap_digests": [ 00:15:07.145 "sha256", 00:15:07.145 "sha384", 00:15:07.145 "sha512" 00:15:07.145 ], 00:15:07.145 "dhchap_dhgroups": [ 00:15:07.145 "null", 00:15:07.145 "ffdhe2048", 00:15:07.145 "ffdhe3072", 00:15:07.145 "ffdhe4096", 00:15:07.145 "ffdhe6144", 00:15:07.145 "ffdhe8192" 00:15:07.145 ] 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "nvmf_set_max_subsystems", 00:15:07.145 "params": { 00:15:07.145 "max_subsystems": 1024 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "nvmf_set_crdt", 00:15:07.145 "params": { 00:15:07.145 "crdt1": 0, 00:15:07.145 "crdt2": 0, 00:15:07.145 "crdt3": 0 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "nvmf_create_transport", 00:15:07.145 "params": { 00:15:07.145 "trtype": "TCP", 00:15:07.145 "max_queue_depth": 128, 00:15:07.145 "max_io_qpairs_per_ctrlr": 127, 00:15:07.145 "in_capsule_data_size": 4096, 00:15:07.145 "max_io_size": 131072, 00:15:07.145 "io_unit_size": 131072, 00:15:07.145 "max_aq_depth": 128, 00:15:07.145 "num_shared_buffers": 511, 00:15:07.145 "buf_cache_size": 4294967295, 00:15:07.145 "dif_insert_or_strip": false, 00:15:07.145 "zcopy": false, 00:15:07.145 "c2h_success": false, 00:15:07.145 "sock_priority": 0, 00:15:07.145 "abort_timeout_sec": 1, 00:15:07.145 "ack_timeout": 0, 00:15:07.145 "data_wr_pool_size": 0 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "nvmf_create_subsystem", 00:15:07.145 "params": { 00:15:07.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.145 "allow_any_host": false, 00:15:07.145 "serial_number": "00000000000000000000", 00:15:07.145 "model_number": "SPDK bdev Controller", 00:15:07.145 "max_namespaces": 32, 00:15:07.145 "min_cntlid": 1, 00:15:07.145 "max_cntlid": 65519, 00:15:07.145 "ana_reporting": false 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "nvmf_subsystem_add_host", 00:15:07.145 "params": { 00:15:07.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.145 "host": "nqn.2016-06.io.spdk:host1", 00:15:07.145 "psk": "key0" 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "nvmf_subsystem_add_ns", 00:15:07.145 "params": { 00:15:07.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.145 "namespace": { 00:15:07.145 "nsid": 1, 00:15:07.145 "bdev_name": "malloc0", 00:15:07.145 "nguid": "E6ECC18D5F5F468CB008EFEC6F644E96", 00:15:07.145 "uuid": "e6ecc18d-5f5f-468c-b008-efec6f644e96", 00:15:07.145 "no_auto_visible": false 00:15:07.145 } 00:15:07.145 } 00:15:07.145 }, 00:15:07.145 { 00:15:07.145 "method": "nvmf_subsystem_add_listener", 00:15:07.145 "params": { 00:15:07.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.145 "listen_address": { 00:15:07.145 "trtype": "TCP", 00:15:07.145 "adrfam": "IPv4", 00:15:07.145 "traddr": "10.0.0.3", 00:15:07.145 "trsvcid": "4420" 00:15:07.145 }, 00:15:07.145 "secure_channel": false, 00:15:07.145 "sock_impl": "ssl" 00:15:07.145 } 00:15:07.145 } 
00:15:07.145 ] 00:15:07.145 } 00:15:07.145 ] 00:15:07.145 }' 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72172 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72172 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72172 ']' 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:07.145 07:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.145 [2024-11-08 07:42:24.954569] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:07.145 [2024-11-08 07:42:24.954809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.145 [2024-11-08 07:42:25.097837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.405 [2024-11-08 07:42:25.140253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.405 [2024-11-08 07:42:25.140519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.405 [2024-11-08 07:42:25.140646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.405 [2024-11-08 07:42:25.140658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.405 [2024-11-08 07:42:25.140666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
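Here the target application (nvmfpid 72172) is relaunched with the full JSON above as its startup configuration rather than built up via individual RPCs: the trace shows nvmf_tgt started inside the nvmf_tgt_ns_spdk network namespace with -c /dev/fd/62 while the config blob is echoed into that descriptor, which is consistent with bash process substitution. A minimal stand-alone sketch of the same pattern (namespace handling dropped, $config standing in for the JSON printed above):

    # Illustrative only: feed a JSON config to nvmf_tgt without writing a file.
    config='{ "subsystems": [ ... ] }'     # the blob echoed above
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config")

The TLS-relevant part of that config sits at the end: nvmf_subsystem_add_host carries "psk": "key0" and nvmf_subsystem_add_listener pins "sock_impl": "ssl" on 10.0.0.3:4420, so the listener comes up as the TLS endpoint the bdevperf initiator attaches to next.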
00:15:07.405 [2024-11-08 07:42:25.141002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.405 [2024-11-08 07:42:25.295248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:07.664 [2024-11-08 07:42:25.364261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.664 [2024-11-08 07:42:25.396214] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.664 [2024-11-08 07:42:25.396518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72204 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72204 /var/tmp/bdevperf.sock 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72204 ']' 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.232 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:08.232 "subsystems": [ 00:15:08.232 { 00:15:08.232 "subsystem": "keyring", 00:15:08.232 "config": [ 00:15:08.232 { 00:15:08.232 "method": "keyring_file_add_key", 00:15:08.232 "params": { 00:15:08.232 "name": "key0", 00:15:08.232 "path": "/tmp/tmp.Qkew0YUEwh" 00:15:08.232 } 00:15:08.232 } 00:15:08.232 ] 00:15:08.232 }, 00:15:08.232 { 00:15:08.232 "subsystem": "iobuf", 00:15:08.232 "config": [ 00:15:08.232 { 00:15:08.232 "method": "iobuf_set_options", 00:15:08.232 "params": { 00:15:08.232 "small_pool_count": 8192, 00:15:08.232 "large_pool_count": 1024, 00:15:08.232 "small_bufsize": 8192, 00:15:08.232 "large_bufsize": 135168, 00:15:08.232 "enable_numa": false 00:15:08.232 } 00:15:08.232 } 00:15:08.232 ] 00:15:08.232 }, 00:15:08.232 { 00:15:08.232 "subsystem": "sock", 00:15:08.232 "config": [ 00:15:08.232 { 00:15:08.232 "method": "sock_set_default_impl", 00:15:08.232 "params": { 00:15:08.232 "impl_name": "uring" 00:15:08.232 } 00:15:08.232 }, 00:15:08.232 { 00:15:08.232 "method": "sock_impl_set_options", 00:15:08.232 "params": { 00:15:08.232 "impl_name": "ssl", 00:15:08.232 "recv_buf_size": 4096, 00:15:08.232 "send_buf_size": 4096, 00:15:08.232 "enable_recv_pipe": true, 00:15:08.232 "enable_quickack": false, 00:15:08.232 "enable_placement_id": 0, 00:15:08.232 "enable_zerocopy_send_server": true, 00:15:08.232 "enable_zerocopy_send_client": false, 00:15:08.232 "zerocopy_threshold": 0, 00:15:08.232 "tls_version": 0, 00:15:08.232 "enable_ktls": 
false 00:15:08.232 } 00:15:08.232 }, 00:15:08.232 { 00:15:08.232 "method": "sock_impl_set_options", 00:15:08.232 "params": { 00:15:08.232 "impl_name": "posix", 00:15:08.232 "recv_buf_size": 2097152, 00:15:08.232 "send_buf_size": 2097152, 00:15:08.232 "enable_recv_pipe": true, 00:15:08.232 "enable_quickack": false, 00:15:08.232 "enable_placement_id": 0, 00:15:08.232 "enable_zerocopy_send_server": true, 00:15:08.232 "enable_zerocopy_send_client": false, 00:15:08.232 "zerocopy_threshold": 0, 00:15:08.232 "tls_version": 0, 00:15:08.232 "enable_ktls": false 00:15:08.232 } 00:15:08.232 }, 00:15:08.232 { 00:15:08.232 "method": "sock_impl_set_options", 00:15:08.232 "params": { 00:15:08.232 "impl_name": "uring", 00:15:08.232 "recv_buf_size": 2097152, 00:15:08.232 "send_buf_size": 2097152, 00:15:08.232 "enable_recv_pipe": true, 00:15:08.232 "enable_quickack": false, 00:15:08.232 "enable_placement_id": 0, 00:15:08.232 "enable_zerocopy_send_server": false, 00:15:08.232 "enable_zerocopy_send_client": false, 00:15:08.232 "zerocopy_threshold": 0, 00:15:08.232 "tls_version": 0, 00:15:08.232 "enable_ktls": false 00:15:08.232 } 00:15:08.232 } 00:15:08.232 ] 00:15:08.232 }, 00:15:08.233 { 00:15:08.233 "subsystem": "vmd", 00:15:08.233 "config": [] 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "subsystem": "accel", 00:15:08.233 "config": [ 00:15:08.233 { 00:15:08.233 "method": "accel_set_options", 00:15:08.233 "params": { 00:15:08.233 "small_cache_size": 128, 00:15:08.233 "large_cache_size": 16, 00:15:08.233 "task_count": 2048, 00:15:08.233 "sequence_count": 2048, 00:15:08.233 "buf_count": 2048 00:15:08.233 } 00:15:08.233 } 00:15:08.233 ] 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "subsystem": "bdev", 00:15:08.233 "config": [ 00:15:08.233 { 00:15:08.233 "method": "bdev_set_options", 00:15:08.233 "params": { 00:15:08.233 "bdev_io_pool_size": 65535, 00:15:08.233 "bdev_io_cache_size": 256, 00:15:08.233 "bdev_auto_examine": true, 00:15:08.233 "iobuf_small_cache_size": 128, 00:15:08.233 "iobuf_large_cache_size": 16 00:15:08.233 } 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "method": "bdev_raid_set_options", 00:15:08.233 "params": { 00:15:08.233 "process_window_size_kb": 1024, 00:15:08.233 "process_max_bandwidth_mb_sec": 0 00:15:08.233 } 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "method": "bdev_iscsi_set_options", 00:15:08.233 "params": { 00:15:08.233 "timeout_sec": 30 00:15:08.233 } 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "method": "bdev_nvme_set_options", 00:15:08.233 "params": { 00:15:08.233 "action_on_timeout": "none", 00:15:08.233 "timeout_us": 0, 00:15:08.233 "timeout_admin_us": 0, 00:15:08.233 "keep_alive_timeout_ms": 10000, 00:15:08.233 "arbitration_burst": 0, 00:15:08.233 "low_priority_weight": 0, 00:15:08.233 "medium_priority_weight": 0, 00:15:08.233 "high_priority_weight": 0, 00:15:08.233 "nvme_adminq_poll_period_us": 10000, 00:15:08.233 "nvme_ioq_poll_period_us": 0, 00:15:08.233 "io_queue_requests": 512, 00:15:08.233 "delay_cmd_submit": true, 00:15:08.233 "transport_retry_count": 4, 00:15:08.233 "bdev_retry_count": 3, 00:15:08.233 "transport_ack_timeout": 0, 00:15:08.233 "ctrlr_loss_timeout_sec": 0, 00:15:08.233 "reconnect_delay_sec": 0, 00:15:08.233 "fast_io_fail_timeout_sec": 0, 00:15:08.233 "disable_auto_failback": false, 00:15:08.233 "generate_uuids": false, 00:15:08.233 "transport_tos": 0, 00:15:08.233 "nvme_error_stat": false, 00:15:08.233 "rdma_srq_size": 0, 00:15:08.233 "io_path_stat": false, 00:15:08.233 "allow_accel_sequence": false, 00:15:08.233 "rdma_max_cq_size": 0, 00:15:08.233 
"rdma_cm_event_timeout_ms": 0, 00:15:08.233 "dhchap_digests": [ 00:15:08.233 "sha256", 00:15:08.233 "sha384", 00:15:08.233 "sha512" 00:15:08.233 ], 00:15:08.233 "dhchap_dhgroups": [ 00:15:08.233 "null", 00:15:08.233 "ffdhe2048", 00:15:08.233 "ffdhe3072", 00:15:08.233 "ffdhe4096", 00:15:08.233 "ffdhe6144", 00:15:08.233 "ffdhe8192" 00:15:08.233 ] 00:15:08.233 } 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "method": "bdev_nvme_attach_controller", 00:15:08.233 "params": { 00:15:08.233 "name": "nvme0", 00:15:08.233 "trtype": "TCP", 00:15:08.233 "adrfam": "IPv4", 00:15:08.233 "traddr": "10.0.0.3", 00:15:08.233 "trsvcid": "4420", 00:15:08.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.233 "prchk_reftag": false, 00:15:08.233 "prchk_guard": false, 00:15:08.233 "ctrlr_loss_timeout_sec": 0, 00:15:08.233 "reconnect_delay_sec": 0, 00:15:08.233 "fast_io_fail_timeout_sec": 0, 00:15:08.233 "psk": "key0", 00:15:08.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.233 "hdgst": false, 00:15:08.233 "ddgst": false, 00:15:08.233 "multipath": "multipath" 00:15:08.233 } 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "method": "bdev_nvme_set_hotplug", 00:15:08.233 "params": { 00:15:08.233 "period_us": 100000, 00:15:08.233 "enable": false 00:15:08.233 } 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "method": "bdev_enable_histogram", 00:15:08.233 "params": { 00:15:08.233 "name": "nvme0n1", 00:15:08.233 "enable": true 00:15:08.233 } 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "method": "bdev_wait_for_examine" 00:15:08.233 } 00:15:08.233 ] 00:15:08.233 }, 00:15:08.233 { 00:15:08.233 "subsystem": "nbd", 00:15:08.233 "config": [] 00:15:08.233 } 00:15:08.233 ] 00:15:08.233 }' 00:15:08.233 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:08.233 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.233 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:08.233 07:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.233 [2024-11-08 07:42:26.049851] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:15:08.233 [2024-11-08 07:42:26.051177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72204 ] 00:15:08.492 [2024-11-08 07:42:26.211073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.492 [2024-11-08 07:42:26.269617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.493 [2024-11-08 07:42:26.394202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.493 [2024-11-08 07:42:26.435738] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:09.061 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:09.061 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:09.061 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:09.061 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:09.320 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.320 07:42:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.579 Running I/O for 1 seconds... 00:15:10.544 5822.00 IOPS, 22.74 MiB/s 00:15:10.544 Latency(us) 00:15:10.544 [2024-11-08T07:42:28.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.544 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:10.544 Verification LBA range: start 0x0 length 0x2000 00:15:10.544 nvme0n1 : 1.01 5879.32 22.97 0.00 0.00 21617.83 4712.35 16602.45 00:15:10.544 [2024-11-08T07:42:28.505Z] =================================================================================================================== 00:15:10.544 [2024-11-08T07:42:28.505Z] Total : 5879.32 22.97 0.00 0.00 21617.83 4712.35 16602.45 00:15:10.544 { 00:15:10.544 "results": [ 00:15:10.544 { 00:15:10.544 "job": "nvme0n1", 00:15:10.544 "core_mask": "0x2", 00:15:10.544 "workload": "verify", 00:15:10.544 "status": "finished", 00:15:10.544 "verify_range": { 00:15:10.544 "start": 0, 00:15:10.544 "length": 8192 00:15:10.544 }, 00:15:10.544 "queue_depth": 128, 00:15:10.544 "io_size": 4096, 00:15:10.544 "runtime": 1.012021, 00:15:10.544 "iops": 5879.3246385203465, 00:15:10.544 "mibps": 22.966111869220104, 00:15:10.544 "io_failed": 0, 00:15:10.544 "io_timeout": 0, 00:15:10.544 "avg_latency_us": 21617.832412965185, 00:15:10.544 "min_latency_us": 4712.350476190476, 00:15:10.544 "max_latency_us": 16602.453333333335 00:15:10.544 } 00:15:10.544 ], 00:15:10.544 "core_count": 1 00:15:10.544 } 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 
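As a quick consistency check, the throughput and latency columns in the results just reported line up with the queue depth and I/O size used for the run:

    5879.32 IO/s × 4096 B   ≈ 24.08 MB/s ≈ 22.97 MiB/s    (throughput = IOPS × io_size, matches the MiB/s column)
    5879.32 IO/s × 21.6 ms  ≈ 127 I/Os in flight          (Little's law, matches the queue depth of 128)

so the verify workload kept the 128-deep queue essentially full for the whole one-second run.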
00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:10.544 nvmf_trace.0 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72204 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72204 ']' 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72204 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:10.544 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72204 00:15:10.803 killing process with pid 72204 00:15:10.803 Received shutdown signal, test time was about 1.000000 seconds 00:15:10.803 00:15:10.803 Latency(us) 00:15:10.803 [2024-11-08T07:42:28.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.803 [2024-11-08T07:42:28.764Z] =================================================================================================================== 00:15:10.803 [2024-11-08T07:42:28.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72204' 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72204 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72204 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:10.803 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:10.803 rmmod nvme_tcp 00:15:10.803 rmmod nvme_fabrics 00:15:11.062 rmmod nvme_keyring 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72172 ']' 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72172 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72172 ']' 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72172 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72172 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72172' 00:15:11.062 killing process with pid 72172 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72172 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72172 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:11.062 07:42:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:11.062 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.062 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:11.062 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:11.321 07:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8roE32jyJM /tmp/tmp.9q8k1FzLBY /tmp/tmp.Qkew0YUEwh 00:15:11.321 00:15:11.321 real 1m24.956s 00:15:11.321 user 2m13.217s 00:15:11.321 sys 0m29.607s 00:15:11.321 ************************************ 00:15:11.321 END TEST nvmf_tls 00:15:11.321 ************************************ 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:11.321 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.582 ************************************ 00:15:11.582 START TEST nvmf_fips 00:15:11.582 ************************************ 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:11.582 * Looking for test storage... 
00:15:11.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.582 --rc genhtml_branch_coverage=1 00:15:11.582 --rc genhtml_function_coverage=1 00:15:11.582 --rc genhtml_legend=1 00:15:11.582 --rc geninfo_all_blocks=1 00:15:11.582 --rc geninfo_unexecuted_blocks=1 00:15:11.582 00:15:11.582 ' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.582 --rc genhtml_branch_coverage=1 00:15:11.582 --rc genhtml_function_coverage=1 00:15:11.582 --rc genhtml_legend=1 00:15:11.582 --rc geninfo_all_blocks=1 00:15:11.582 --rc geninfo_unexecuted_blocks=1 00:15:11.582 00:15:11.582 ' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.582 --rc genhtml_branch_coverage=1 00:15:11.582 --rc genhtml_function_coverage=1 00:15:11.582 --rc genhtml_legend=1 00:15:11.582 --rc geninfo_all_blocks=1 00:15:11.582 --rc geninfo_unexecuted_blocks=1 00:15:11.582 00:15:11.582 ' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:11.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.582 --rc genhtml_branch_coverage=1 00:15:11.582 --rc genhtml_function_coverage=1 00:15:11.582 --rc genhtml_legend=1 00:15:11.582 --rc geninfo_all_blocks=1 00:15:11.582 --rc geninfo_unexecuted_blocks=1 00:15:11.582 00:15:11.582 ' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
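Both the lcov probe just above and the OpenSSL gate a few lines further down (ge 3.1.1 3.0.0 against the target of 3.0.0) go through the same scripts/common.sh comparator, whose mechanics are readable in the trace: each version is split on '.', '-' and ':' and the fields are compared numerically, left to right, with each field first validated by a small decimal helper. A rough stand-alone sketch of that idea (an approximation for illustration, not the helper itself, and it skips the decimal validation step):

    # True (returns 0) when $1 >= $2; missing fields count as 0.
    version_ge() {
        local IFS='.-:' v
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            ((${a[v]:-0} > ${b[v]:-0})) && return 0
            ((${a[v]:-0} < ${b[v]:-0})) && return 1
        done
        return 0
    }
    version_ge 3.1.1 3.0.0 && echo "openssl is new enough for fips.sh"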
00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.582 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:11.583 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.583 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.848 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:11.849 Error setting digest 00:15:11.849 4062B0F6787F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:11.849 4062B0F6787F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:11.849 
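The fips.sh steps traced above sanity-check the environment before any NVMe/TCP work: locate the fips.so provider module, note that openssl fipsinstall is disabled in the Red Hat build, generate a dedicated OPENSSL_CONF (spdk_fips.conf), confirm both the base and FIPS providers are listed, and prove enforcement by expecting openssl md5 to fail. A hand-run equivalent might look like the sketch below; it is illustrative only, skips the generated config, and module paths can differ by distro:

modules_dir=$(openssl info -modulesdir)             # e.g. /usr/lib64/ossl-modules
[[ -f "$modules_dir/fips.so" ]] || echo "warning: no fips.so provider module found"
openssl list -providers | grep -qi fips || echo "warning: FIPS provider not loaded"
# Under an enforcing FIPS setup a legacy digest must be rejected:
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 still works - FIPS is not being enforced"
else
    echo "MD5 rejected - consistent with FIPS mode"
fi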
07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:11.849 Cannot find device "nvmf_init_br" 00:15:11.849 07:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:11.849 Cannot find device "nvmf_init_br2" 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:11.849 Cannot find device "nvmf_tgt_br" 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.849 Cannot find device "nvmf_tgt_br2" 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:11.849 Cannot find device "nvmf_init_br" 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:11.849 Cannot find device "nvmf_init_br2" 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:11.849 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:12.109 Cannot find device "nvmf_tgt_br" 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:12.109 Cannot find device "nvmf_tgt_br2" 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:12.109 Cannot find device "nvmf_br" 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:12.109 Cannot find device "nvmf_init_if" 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:12.109 Cannot find device "nvmf_init_if2" 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.109 07:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:12.109 07:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:12.109 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:12.109 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:12.109 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:12.109 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:12.109 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:12.109 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:12.109 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.110 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.110 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.110 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:12.110 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:12.110 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:12.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:15:12.369 00:15:12.369 --- 10.0.0.3 ping statistics --- 00:15:12.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.369 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:12.369 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:12.369 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:15:12.369 00:15:12.369 --- 10.0.0.4 ping statistics --- 00:15:12.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.369 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:12.369 00:15:12.369 --- 10.0.0.1 ping statistics --- 00:15:12.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.369 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:12.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:12.369 00:15:12.369 --- 10.0.0.2 ping statistics --- 00:15:12.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.369 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.369 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72519 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72519 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72519 ']' 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:12.370 07:42:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:12.370 [2024-11-08 07:42:30.274706] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
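The nvmf_veth_init sequence traced above (nvmf/common.sh@177-225) can be reduced to a single initiator/target pair; the commands below are lifted from the trace with the second pair and duplicate bridge legs dropped, as a minimal sketch of the topology the test pings across:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br          # bridge the two veth peer ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                               # host -> target namespace via the bridge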
00:15:12.370 [2024-11-08 07:42:30.274814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.629 [2024-11-08 07:42:30.435607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.629 [2024-11-08 07:42:30.497970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.629 [2024-11-08 07:42:30.498046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.629 [2024-11-08 07:42:30.498062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.629 [2024-11-08 07:42:30.498075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.629 [2024-11-08 07:42:30.498087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.629 [2024-11-08 07:42:30.498450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.629 [2024-11-08 07:42:30.546423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.NqG 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.NqG 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.NqG 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.NqG 00:15:13.567 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.826 [2024-11-08 07:42:31.567196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.826 [2024-11-08 07:42:31.583138] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:13.826 [2024-11-08 07:42:31.583303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.826 malloc0 00:15:13.826 07:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72561 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72561 /var/tmp/bdevperf.sock 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72561 ']' 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:13.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:13.826 07:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.826 [2024-11-08 07:42:31.732139] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:13.826 [2024-11-08 07:42:31.732235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72561 ] 00:15:14.085 [2024-11-08 07:42:31.882929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.085 [2024-11-08 07:42:31.929581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.085 [2024-11-08 07:42:31.970138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.653 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:14.653 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:15:14.653 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.NqG 00:15:14.912 07:42:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:15.172 [2024-11-08 07:42:33.094060] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:15.430 TLSTESTn1 00:15:15.430 07:42:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.430 Running I/O for 10 seconds... 
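The 10-second bdevperf run just launched was wired up through the RPC socket traced above: the interchange-format PSK is written to a 0600 file, registered with the keyring, and the controller is attached over 10.0.0.3:4420 with --psk. A condensed replay of those RPCs (paths shortened to repo-relative form; the key is the test's sample value from the trace, not a secret):

KEY_FILE=$(mktemp -t spdk-psk.XXX)
chmod 0600 "$KEY_FILE"
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_FILE"
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_FILE"
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests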
00:15:17.745 5681.00 IOPS, 22.19 MiB/s [2024-11-08T07:42:36.273Z] 5717.50 IOPS, 22.33 MiB/s [2024-11-08T07:42:37.651Z] 5751.33 IOPS, 22.47 MiB/s [2024-11-08T07:42:38.587Z] 5778.50 IOPS, 22.57 MiB/s [2024-11-08T07:42:39.524Z] 5801.60 IOPS, 22.66 MiB/s [2024-11-08T07:42:40.467Z] 5812.33 IOPS, 22.70 MiB/s [2024-11-08T07:42:41.405Z] 5820.43 IOPS, 22.74 MiB/s [2024-11-08T07:42:42.343Z] 5818.62 IOPS, 22.73 MiB/s [2024-11-08T07:42:43.280Z] 5828.00 IOPS, 22.77 MiB/s [2024-11-08T07:42:43.540Z] 5834.70 IOPS, 22.79 MiB/s 00:15:25.579 Latency(us) 00:15:25.579 [2024-11-08T07:42:43.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.579 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:25.579 Verification LBA range: start 0x0 length 0x2000 00:15:25.579 TLSTESTn1 : 10.01 5840.45 22.81 0.00 0.00 21882.73 3963.37 18724.57 00:15:25.579 [2024-11-08T07:42:43.540Z] =================================================================================================================== 00:15:25.579 [2024-11-08T07:42:43.540Z] Total : 5840.45 22.81 0.00 0.00 21882.73 3963.37 18724.57 00:15:25.579 { 00:15:25.579 "results": [ 00:15:25.579 { 00:15:25.579 "job": "TLSTESTn1", 00:15:25.579 "core_mask": "0x4", 00:15:25.579 "workload": "verify", 00:15:25.579 "status": "finished", 00:15:25.579 "verify_range": { 00:15:25.579 "start": 0, 00:15:25.579 "length": 8192 00:15:25.579 }, 00:15:25.579 "queue_depth": 128, 00:15:25.579 "io_size": 4096, 00:15:25.579 "runtime": 10.011559, 00:15:25.579 "iops": 5840.449024972035, 00:15:25.579 "mibps": 22.81425400379701, 00:15:25.579 "io_failed": 0, 00:15:25.579 "io_timeout": 0, 00:15:25.579 "avg_latency_us": 21882.72982480829, 00:15:25.579 "min_latency_us": 3963.367619047619, 00:15:25.579 "max_latency_us": 18724.571428571428 00:15:25.579 } 00:15:25.579 ], 00:15:25.579 "core_count": 1 00:15:25.579 } 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:25.579 nvmf_trace.0 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72561 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72561 ']' 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
72561 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72561 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:25.579 killing process with pid 72561 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72561' 00:15:25.579 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.579 00:15:25.579 Latency(us) 00:15:25.579 [2024-11-08T07:42:43.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.579 [2024-11-08T07:42:43.540Z] =================================================================================================================== 00:15:25.579 [2024-11-08T07:42:43.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72561 00:15:25.579 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72561 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:25.839 rmmod nvme_tcp 00:15:25.839 rmmod nvme_fabrics 00:15:25.839 rmmod nvme_keyring 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72519 ']' 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72519 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72519 ']' 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 72519 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72519 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:25.839 killing process with pid 72519 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72519' 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72519 00:15:25.839 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72519 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:26.099 07:42:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:26.099 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.099 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:26.099 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:26.099 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:26.099 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:26.099 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:26.358 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:26.358 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:26.358 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.358 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.358 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:26.358 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.358 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.358 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.359 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:26.359 07:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.NqG 00:15:26.359 00:15:26.359 real 0m14.926s 00:15:26.359 user 0m20.082s 00:15:26.359 sys 0m6.128s 00:15:26.359 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:26.359 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:26.359 ************************************ 00:15:26.359 END TEST nvmf_fips 00:15:26.359 ************************************ 00:15:26.359 07:42:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:26.359 07:42:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:26.359 07:42:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:26.359 07:42:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.359 ************************************ 00:15:26.359 START TEST nvmf_control_msg_list 00:15:26.359 ************************************ 00:15:26.359 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:26.619 * Looking for test storage... 00:15:26.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.619 --rc genhtml_branch_coverage=1 00:15:26.619 --rc genhtml_function_coverage=1 00:15:26.619 --rc genhtml_legend=1 00:15:26.619 --rc geninfo_all_blocks=1 00:15:26.619 --rc geninfo_unexecuted_blocks=1 00:15:26.619 00:15:26.619 ' 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.619 --rc genhtml_branch_coverage=1 00:15:26.619 --rc genhtml_function_coverage=1 00:15:26.619 --rc genhtml_legend=1 00:15:26.619 --rc geninfo_all_blocks=1 00:15:26.619 --rc geninfo_unexecuted_blocks=1 00:15:26.619 00:15:26.619 ' 00:15:26.619 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.619 --rc genhtml_branch_coverage=1 00:15:26.619 --rc genhtml_function_coverage=1 00:15:26.619 --rc genhtml_legend=1 00:15:26.619 --rc geninfo_all_blocks=1 00:15:26.620 --rc geninfo_unexecuted_blocks=1 00:15:26.620 00:15:26.620 ' 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:26.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.620 --rc genhtml_branch_coverage=1 00:15:26.620 --rc genhtml_function_coverage=1 00:15:26.620 --rc genhtml_legend=1 00:15:26.620 --rc geninfo_all_blocks=1 00:15:26.620 --rc 
geninfo_unexecuted_blocks=1 00:15:26.620 00:15:26.620 ' 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:26.620 Cannot find device "nvmf_init_br" 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:26.620 Cannot find device "nvmf_init_br2" 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:26.620 Cannot find device "nvmf_tgt_br" 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.620 Cannot find device "nvmf_tgt_br2" 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:26.620 Cannot find device "nvmf_init_br" 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:26.620 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:26.620 Cannot find device "nvmf_init_br2" 00:15:26.621 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:26.621 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:26.880 Cannot find device "nvmf_tgt_br" 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:26.880 Cannot find device "nvmf_tgt_br2" 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:26.880 Cannot find device "nvmf_br" 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:26.880 Cannot find 
device "nvmf_init_if" 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:26.880 Cannot find device "nvmf_init_if2" 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:26.880 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:26.881 07:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:26.881 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:27.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.140 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:15:27.140 00:15:27.140 --- 10.0.0.3 ping statistics --- 00:15:27.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.140 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:27.140 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:27.140 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:27.140 00:15:27.140 --- 10.0.0.4 ping statistics --- 00:15:27.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.140 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:27.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:27.140 00:15:27.140 --- 10.0.0.1 ping statistics --- 00:15:27.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.140 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:27.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:27.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:15:27.140 00:15:27.140 --- 10.0.0.2 ping statistics --- 00:15:27.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.140 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.140 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72954 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72954 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 72954 ']' 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:27.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:27.141 07:42:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:27.141 [2024-11-08 07:42:44.991022] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:27.141 [2024-11-08 07:42:44.991693] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.406 [2024-11-08 07:42:45.151671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.406 [2024-11-08 07:42:45.207472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.406 [2024-11-08 07:42:45.207532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.406 [2024-11-08 07:42:45.207548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.406 [2024-11-08 07:42:45.207561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.406 [2024-11-08 07:42:45.207572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.406 [2024-11-08 07:42:45.207950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.406 [2024-11-08 07:42:45.255906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.012 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.012 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:15:28.012 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.012 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:28.012 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:28.013 [2024-11-08 07:42:45.928740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:28.013 Malloc0 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:28.013 [2024-11-08 07:42:45.965525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.013 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.272 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=72986 00:15:28.272 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:28.272 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=72987 00:15:28.272 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:28.272 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=72988 00:15:28.272 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 72986 00:15:28.272 07:42:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:28.272 [2024-11-08 07:42:46.154142] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:28.272 [2024-11-08 07:42:46.154308] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:28.272 [2024-11-08 07:42:46.154430] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:29.655 Initializing NVMe Controllers 00:15:29.655 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:29.655 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:29.655 Initialization complete. Launching workers. 00:15:29.655 ======================================================== 00:15:29.655 Latency(us) 00:15:29.655 Device Information : IOPS MiB/s Average min max 00:15:29.655 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4755.00 18.57 210.02 175.65 1760.30 00:15:29.655 ======================================================== 00:15:29.655 Total : 4755.00 18.57 210.02 175.65 1760.30 00:15:29.655 00:15:29.655 Initializing NVMe Controllers 00:15:29.655 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:29.655 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:29.655 Initialization complete. Launching workers. 00:15:29.655 ======================================================== 00:15:29.655 Latency(us) 00:15:29.655 Device Information : IOPS MiB/s Average min max 00:15:29.655 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4768.97 18.63 209.48 120.75 279.16 00:15:29.655 ======================================================== 00:15:29.655 Total : 4768.97 18.63 209.48 120.75 279.16 00:15:29.655 00:15:29.655 Initializing NVMe Controllers 00:15:29.655 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:29.655 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:29.655 Initialization complete. Launching workers. 
00:15:29.655 ======================================================== 00:15:29.655 Latency(us) 00:15:29.655 Device Information : IOPS MiB/s Average min max 00:15:29.655 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4761.00 18.60 209.77 151.34 420.02 00:15:29.655 ======================================================== 00:15:29.655 Total : 4761.00 18.60 209.77 151.34 420.02 00:15:29.655 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 72987 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 72988 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:29.655 rmmod nvme_tcp 00:15:29.655 rmmod nvme_fabrics 00:15:29.655 rmmod nvme_keyring 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72954 ']' 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72954 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 72954 ']' 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 72954 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72954 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:29.655 killing process with pid 72954 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72954' 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 72954 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@976 -- # wait 72954 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:29.655 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:29.656 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:29.915 00:15:29.915 real 0m3.476s 00:15:29.915 user 0m5.163s 00:15:29.915 
sys 0m1.678s 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:29.915 ************************************ 00:15:29.915 END TEST nvmf_control_msg_list 00:15:29.915 ************************************ 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.915 ************************************ 00:15:29.915 START TEST nvmf_wait_for_buf 00:15:29.915 ************************************ 00:15:29.915 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:30.175 * Looking for test storage... 00:15:30.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.175 --rc genhtml_branch_coverage=1 00:15:30.175 --rc genhtml_function_coverage=1 00:15:30.175 --rc genhtml_legend=1 00:15:30.175 --rc geninfo_all_blocks=1 00:15:30.175 --rc geninfo_unexecuted_blocks=1 00:15:30.175 00:15:30.175 ' 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.175 --rc genhtml_branch_coverage=1 00:15:30.175 --rc genhtml_function_coverage=1 00:15:30.175 --rc genhtml_legend=1 00:15:30.175 --rc geninfo_all_blocks=1 00:15:30.175 --rc geninfo_unexecuted_blocks=1 00:15:30.175 00:15:30.175 ' 00:15:30.175 07:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.175 --rc genhtml_branch_coverage=1 00:15:30.175 --rc genhtml_function_coverage=1 00:15:30.175 --rc genhtml_legend=1 00:15:30.175 --rc geninfo_all_blocks=1 00:15:30.175 --rc geninfo_unexecuted_blocks=1 00:15:30.175 00:15:30.175 ' 00:15:30.175 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:30.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.175 --rc genhtml_branch_coverage=1 00:15:30.175 --rc genhtml_function_coverage=1 00:15:30.175 --rc genhtml_legend=1 00:15:30.175 --rc geninfo_all_blocks=1 00:15:30.175 --rc geninfo_unexecuted_blocks=1 00:15:30.175 00:15:30.175 ' 00:15:30.175 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.175 07:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:30.175 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.175 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:30.176 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:30.176 Cannot find device "nvmf_init_br" 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:30.176 Cannot find device "nvmf_init_br2" 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:30.176 Cannot find device "nvmf_tgt_br" 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.176 Cannot find device "nvmf_tgt_br2" 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:30.176 Cannot find device "nvmf_init_br" 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:30.176 Cannot find device "nvmf_init_br2" 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:30.176 Cannot find device "nvmf_tgt_br" 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:30.176 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:30.436 Cannot find device "nvmf_tgt_br2" 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:30.436 Cannot find device "nvmf_br" 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:30.436 Cannot find device "nvmf_init_if" 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:30.436 Cannot find device "nvmf_init_if2" 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.436 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:30.436 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:30.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:30.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:15:30.695 00:15:30.695 --- 10.0.0.3 ping statistics --- 00:15:30.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.695 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:30.695 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:30.695 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:15:30.695 00:15:30.695 --- 10.0.0.4 ping statistics --- 00:15:30.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.695 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:30.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:30.695 00:15:30.695 --- 10.0.0.1 ping statistics --- 00:15:30.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.695 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:30.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:30.695 00:15:30.695 --- 10.0.0.2 ping statistics --- 00:15:30.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.695 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:30.695 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73222 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73222 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 73222 ']' 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:30.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:30.696 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.696 [2024-11-08 07:42:48.524044] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
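Note: the entries above show nvmfappstart launching the target inside the test namespace with --wait-for-rpc and then blocking until its RPC socket answers. A minimal standalone sketch of that pattern follows (the poll loop is illustrative, not the autotest waitforlisten helper itself; paths are the ones this run uses):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the default RPC socket until the app accepts commands.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done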
00:15:30.696 [2024-11-08 07:42:48.524137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.954 [2024-11-08 07:42:48.675308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.954 [2024-11-08 07:42:48.716693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.954 [2024-11-08 07:42:48.716741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.955 [2024-11-08 07:42:48.716750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.955 [2024-11-08 07:42:48.716758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.955 [2024-11-08 07:42:48.716765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.955 [2024-11-08 07:42:48.717085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.955 07:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.955 [2024-11-08 07:42:48.859595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.955 Malloc0 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.955 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:31.214 [2024-11-08 07:42:48.915360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:31.214 [2024-11-08 07:42:48.939450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.214 07:42:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:31.214 [2024-11-08 07:42:49.130115] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:32.603 Initializing NVMe Controllers 00:15:32.603 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:32.603 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:32.603 Initialization complete. Launching workers. 00:15:32.603 ======================================================== 00:15:32.603 Latency(us) 00:15:32.603 Device Information : IOPS MiB/s Average min max 00:15:32.603 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7992.32 7044.96 8043.79 00:15:32.603 ======================================================== 00:15:32.603 Total : 504.00 63.00 7992.32 7044.96 8043.79 00:15:32.603 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.603 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:32.604 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.604 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.604 rmmod nvme_tcp 00:15:32.604 rmmod nvme_fabrics 00:15:32.604 rmmod nvme_keyring 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73222 ']' 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73222 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 73222 ']' 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # 
kill -0 73222 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73222 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:32.863 killing process with pid 73222 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73222' 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 73222 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 73222 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:32.863 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.122 07:42:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.122 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:33.122 00:15:33.122 real 0m3.229s 00:15:33.122 user 0m2.456s 00:15:33.122 sys 0m0.962s 00:15:33.122 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:33.122 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.122 ************************************ 00:15:33.122 END TEST nvmf_wait_for_buf 00:15:33.122 ************************************ 00:15:33.122 07:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:33.122 07:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.382 ************************************ 00:15:33.382 START TEST nvmf_nsid 00:15:33.382 ************************************ 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:33.382 * Looking for test storage... 
00:15:33.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.382 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:33.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.383 --rc genhtml_branch_coverage=1 00:15:33.383 --rc genhtml_function_coverage=1 00:15:33.383 --rc genhtml_legend=1 00:15:33.383 --rc geninfo_all_blocks=1 00:15:33.383 --rc geninfo_unexecuted_blocks=1 00:15:33.383 00:15:33.383 ' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:33.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.383 --rc genhtml_branch_coverage=1 00:15:33.383 --rc genhtml_function_coverage=1 00:15:33.383 --rc genhtml_legend=1 00:15:33.383 --rc geninfo_all_blocks=1 00:15:33.383 --rc geninfo_unexecuted_blocks=1 00:15:33.383 00:15:33.383 ' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:33.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.383 --rc genhtml_branch_coverage=1 00:15:33.383 --rc genhtml_function_coverage=1 00:15:33.383 --rc genhtml_legend=1 00:15:33.383 --rc geninfo_all_blocks=1 00:15:33.383 --rc geninfo_unexecuted_blocks=1 00:15:33.383 00:15:33.383 ' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:33.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.383 --rc genhtml_branch_coverage=1 00:15:33.383 --rc genhtml_function_coverage=1 00:15:33.383 --rc genhtml_legend=1 00:15:33.383 --rc geninfo_all_blocks=1 00:15:33.383 --rc geninfo_unexecuted_blocks=1 00:15:33.383 00:15:33.383 ' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
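Note: the cmp_versions trace above splits each dotted version on "." and walks the fields numerically to decide whether the installed lcov predates 2.x. A rough standalone equivalent, with the function name and return convention chosen here only for illustration:

    version_lt() {    # returns 0 when $1 < $2, for dotted numeric versions
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1      # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"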
00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.383 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.383 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:33.384 Cannot find device "nvmf_init_br" 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:15:33.384 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:33.643 Cannot find device "nvmf_init_br2" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:33.643 Cannot find device "nvmf_tgt_br" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.643 Cannot find device "nvmf_tgt_br2" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:33.643 Cannot find device "nvmf_init_br" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:33.643 Cannot find device "nvmf_init_br2" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:33.643 Cannot find device "nvmf_tgt_br" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:33.643 Cannot find device "nvmf_tgt_br2" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:33.643 Cannot find device "nvmf_br" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:33.643 Cannot find device "nvmf_init_if" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:33.643 Cannot find device "nvmf_init_if2" 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:15:33.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:33.643 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
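Note: the block above rebuilds the test topology from scratch: a namespace for the target, veth pairs whose peer ends stay in the host namespace, 10.0.0.x/24 addresses on each end, and a bridge tying the host-side ends together. Condensed into a standalone sketch (one initiator/target pair only; interface and namespace names are the ones used in this log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3    # host -> target namespace sanity check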
00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:33.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:15:33.903 00:15:33.903 --- 10.0.0.3 ping statistics --- 00:15:33.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.903 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:33.903 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:33.903 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:15:33.903 00:15:33.903 --- 10.0.0.4 ping statistics --- 00:15:33.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.903 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:33.903 00:15:33.903 --- 10.0.0.1 ping statistics --- 00:15:33.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.903 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:33.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:33.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:15:33.903 00:15:33.903 --- 10.0.0.2 ping statistics --- 00:15:33.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.903 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73479 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73479 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73479 ']' 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.903 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:33.904 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.904 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:15:33.904 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:33.904 07:42:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:33.904 [2024-11-08 07:42:51.827608] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
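Note: each ipts call above appends an "-m comment --comment SPDK_NVMF:<rule>" tag to the real iptables rule, and the iptr cleanup seen at the end of the previous test rewrites the ruleset with every tagged rule filtered out. A hedged sketch reconstructing that observable behavior (not the common.sh helpers themselves):

    ipts() {   # add one rule, tagged so the suite can find it again
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {   # drop every tagged rule, leave the rest of the ruleset alone
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    iptr    # teardown: everything tagged SPDK_NVMF disappears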
00:15:33.904 [2024-11-08 07:42:51.828246] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.163 [2024-11-08 07:42:51.976734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.164 [2024-11-08 07:42:52.024764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.164 [2024-11-08 07:42:52.024807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.164 [2024-11-08 07:42:52.024817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.164 [2024-11-08 07:42:52.024826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.164 [2024-11-08 07:42:52.024833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.164 [2024-11-08 07:42:52.025111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.164 [2024-11-08 07:42:52.066010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73504 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:34.423 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=4e1268c6-8fd9-4451-ab9f-b66703a110a6 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5d1ac315-b7fa-40c7-bf37-233385a62f1d 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4878cf1f-00bf-4919-9ab8-42bd7fb330ba 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:34.424 null0 00:15:34.424 null1 00:15:34.424 null2 00:15:34.424 [2024-11-08 07:42:52.237827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.424 [2024-11-08 07:42:52.240457] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:34.424 [2024-11-08 07:42:52.240542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73504 ] 00:15:34.424 [2024-11-08 07:42:52.261942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73504 /var/tmp/tgt2.sock 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73504 ']' 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:34.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
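Note: at this point the nsid test drives two SPDK processes at once: the nvmf_tgt started by nvmftestinit on the default /var/tmp/spdk.sock, and a second spdk_tgt bound to /var/tmp/tgt2.sock, so every rpc.py call has to pick its process explicitly with -s. A small sketch of that split (the subsystem query and null-bdev parameters are illustrative, not the exact nsid.sh configuration):

    # second target: its own core mask and its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock nvmf_get_subsystems              # first target
    $rpc -s /var/tmp/tgt2.sock bdev_null_create null0 100 4096  # second target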
00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:34.424 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:34.683 [2024-11-08 07:42:52.389217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.683 [2024-11-08 07:42:52.437138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.683 [2024-11-08 07:42:52.492034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.942 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:34.942 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:15:34.942 07:42:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:15:35.201 [2024-11-08 07:42:53.063530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.201 [2024-11-08 07:42:53.079629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:15:35.201 nvme0n1 nvme0n2 00:15:35.201 nvme1n1 00:15:35.201 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:15:35.201 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:15:35.201 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:15:35.460 07:42:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:15:36.396 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:36.396 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:36.396 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:36.396 07:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:15:36.397 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:15:36.397 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 4e1268c6-8fd9-4451-ab9f-b66703a110a6 00:15:36.397 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:36.397 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:15:36.397 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:15:36.397 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:15:36.397 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4e1268c68fd94451ab9fb66703a110a6 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4E1268C68FD94451AB9FB66703A110A6 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 4E1268C68FD94451AB9FB66703A110A6 == \4\E\1\2\6\8\C\6\8\F\D\9\4\4\5\1\A\B\9\F\B\6\6\7\0\3\A\1\1\0\A\6 ]] 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5d1ac315-b7fa-40c7-bf37-233385a62f1d 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5d1ac315b7fa40c7bf37233385a62f1d 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5D1AC315B7FA40C7BF37233385A62F1D 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5D1AC315B7FA40C7BF37233385A62F1D == \5\D\1\A\C\3\1\5\B\7\F\A\4\0\C\7\B\F\3\7\2\3\3\3\8\5\A\6\2\F\1\D ]] 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:36.656 07:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4878cf1f-00bf-4919-9ab8-42bd7fb330ba 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4878cf1f00bf49199ab842bd7fb330ba 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4878CF1F00BF49199AB842BD7FB330BA 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4878CF1F00BF49199AB842BD7FB330BA == \4\8\7\8\C\F\1\F\0\0\B\F\4\9\1\9\9\A\B\8\4\2\B\D\7\F\B\3\3\0\B\A ]] 00:15:36.656 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73504 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73504 ']' 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73504 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73504 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73504' 00:15:36.916 killing process with pid 73504 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73504 00:15:36.916 07:42:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73504 00:15:37.175 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:15:37.175 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:37.175 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:15:37.175 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:15:37.175 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:15:37.175 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:37.175 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:37.175 rmmod nvme_tcp 00:15:37.175 rmmod nvme_fabrics 00:15:37.435 rmmod nvme_keyring 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73479 ']' 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73479 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73479 ']' 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73479 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73479 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:37.435 killing process with pid 73479 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73479' 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73479 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73479 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:37.435 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:15:37.694 00:15:37.694 real 0m4.519s 00:15:37.694 user 0m6.391s 00:15:37.694 sys 0m2.001s 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:37.694 ************************************ 00:15:37.694 END TEST nvmf_nsid 00:15:37.694 ************************************ 00:15:37.694 07:42:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:37.953 ************************************ 00:15:37.953 END TEST nvmf_target_extra 00:15:37.953 ************************************ 00:15:37.953 00:15:37.953 real 4m59.200s 00:15:37.953 user 10m6.336s 00:15:37.953 sys 1m21.069s 00:15:37.953 07:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:37.953 07:42:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.953 07:42:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:37.953 07:42:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:37.953 07:42:55 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:37.953 07:42:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:37.953 ************************************ 00:15:37.953 START TEST nvmf_host 00:15:37.953 ************************************ 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:37.953 * Looking for test storage... 
00:15:37.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:37.953 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:37.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.954 --rc genhtml_branch_coverage=1 00:15:37.954 --rc genhtml_function_coverage=1 00:15:37.954 --rc genhtml_legend=1 00:15:37.954 --rc geninfo_all_blocks=1 00:15:37.954 --rc geninfo_unexecuted_blocks=1 00:15:37.954 00:15:37.954 ' 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:37.954 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:37.954 --rc genhtml_branch_coverage=1 00:15:37.954 --rc genhtml_function_coverage=1 00:15:37.954 --rc genhtml_legend=1 00:15:37.954 --rc geninfo_all_blocks=1 00:15:37.954 --rc geninfo_unexecuted_blocks=1 00:15:37.954 00:15:37.954 ' 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:37.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.954 --rc genhtml_branch_coverage=1 00:15:37.954 --rc genhtml_function_coverage=1 00:15:37.954 --rc genhtml_legend=1 00:15:37.954 --rc geninfo_all_blocks=1 00:15:37.954 --rc geninfo_unexecuted_blocks=1 00:15:37.954 00:15:37.954 ' 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:37.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.954 --rc genhtml_branch_coverage=1 00:15:37.954 --rc genhtml_function_coverage=1 00:15:37.954 --rc genhtml_legend=1 00:15:37.954 --rc geninfo_all_blocks=1 00:15:37.954 --rc geninfo_unexecuted_blocks=1 00:15:37.954 00:15:37.954 ' 00:15:37.954 07:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:38.216 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:38.216 
07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.216 ************************************ 00:15:38.216 START TEST nvmf_identify 00:15:38.216 ************************************ 00:15:38.216 07:42:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:38.216 * Looking for test storage... 00:15:38.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:38.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.216 --rc genhtml_branch_coverage=1 00:15:38.216 --rc genhtml_function_coverage=1 00:15:38.216 --rc genhtml_legend=1 00:15:38.216 --rc geninfo_all_blocks=1 00:15:38.216 --rc geninfo_unexecuted_blocks=1 00:15:38.216 00:15:38.216 ' 00:15:38.216 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:38.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.216 --rc genhtml_branch_coverage=1 00:15:38.216 --rc genhtml_function_coverage=1 00:15:38.216 --rc genhtml_legend=1 00:15:38.216 --rc geninfo_all_blocks=1 00:15:38.216 --rc geninfo_unexecuted_blocks=1 00:15:38.217 00:15:38.217 ' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:38.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.217 --rc genhtml_branch_coverage=1 00:15:38.217 --rc genhtml_function_coverage=1 00:15:38.217 --rc genhtml_legend=1 00:15:38.217 --rc geninfo_all_blocks=1 00:15:38.217 --rc geninfo_unexecuted_blocks=1 00:15:38.217 00:15:38.217 ' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:38.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.217 --rc genhtml_branch_coverage=1 00:15:38.217 --rc genhtml_function_coverage=1 00:15:38.217 --rc genhtml_legend=1 00:15:38.217 --rc geninfo_all_blocks=1 00:15:38.217 --rc geninfo_unexecuted_blocks=1 00:15:38.217 00:15:38.217 ' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.217 
07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:38.217 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.217 07:42:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:38.217 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:38.476 Cannot find device "nvmf_init_br" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:38.476 Cannot find device "nvmf_init_br2" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:38.476 Cannot find device "nvmf_tgt_br" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:38.476 Cannot find device "nvmf_tgt_br2" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:38.476 Cannot find device "nvmf_init_br" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:38.476 Cannot find device "nvmf_init_br2" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:38.476 Cannot find device "nvmf_tgt_br" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:38.476 Cannot find device "nvmf_tgt_br2" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:38.476 Cannot find device "nvmf_br" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:38.476 Cannot find device "nvmf_init_if" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:38.476 Cannot find device "nvmf_init_if2" 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:38.476 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:38.736 
07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:38.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:38.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:15:38.736 00:15:38.736 --- 10.0.0.3 ping statistics --- 00:15:38.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.736 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:38.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:38.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:15:38.736 00:15:38.736 --- 10.0.0.4 ping statistics --- 00:15:38.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.736 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:38.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:38.736 00:15:38.736 --- 10.0.0.1 ping statistics --- 00:15:38.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.736 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:38.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:15:38.736 00:15:38.736 --- 10.0.0.2 ping statistics --- 00:15:38.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.736 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73858 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73858 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 73858 ']' 00:15:38.736 
07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:38.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:38.736 07:42:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:38.995 [2024-11-08 07:42:56.724492] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:38.995 [2024-11-08 07:42:56.724584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.995 [2024-11-08 07:42:56.877076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.995 [2024-11-08 07:42:56.927167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.995 [2024-11-08 07:42:56.927218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.995 [2024-11-08 07:42:56.927228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.995 [2024-11-08 07:42:56.927236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.995 [2024-11-08 07:42:56.927243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.995 [2024-11-08 07:42:56.928139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.995 [2024-11-08 07:42:56.928327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.995 [2024-11-08 07:42:56.928386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.995 [2024-11-08 07:42:56.928391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.254 [2024-11-08 07:42:56.969326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.822 [2024-11-08 07:42:57.642649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.822 Malloc0 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.822 [2024-11-08 07:42:57.762111] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.822 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.084 [ 00:15:40.084 { 00:15:40.084 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.084 "subtype": "Discovery", 00:15:40.084 "listen_addresses": [ 00:15:40.084 { 00:15:40.084 "trtype": "TCP", 00:15:40.084 "adrfam": "IPv4", 00:15:40.084 "traddr": "10.0.0.3", 00:15:40.084 "trsvcid": "4420" 00:15:40.084 } 00:15:40.084 ], 00:15:40.084 "allow_any_host": true, 00:15:40.084 "hosts": [] 00:15:40.084 }, 00:15:40.084 { 00:15:40.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.084 "subtype": "NVMe", 00:15:40.084 "listen_addresses": [ 00:15:40.084 { 00:15:40.084 "trtype": "TCP", 00:15:40.084 "adrfam": "IPv4", 00:15:40.084 "traddr": "10.0.0.3", 00:15:40.084 "trsvcid": "4420" 00:15:40.084 } 00:15:40.084 ], 00:15:40.084 "allow_any_host": true, 00:15:40.084 "hosts": [], 00:15:40.084 "serial_number": "SPDK00000000000001", 00:15:40.084 "model_number": "SPDK bdev Controller", 00:15:40.084 "max_namespaces": 32, 00:15:40.084 "min_cntlid": 1, 00:15:40.084 "max_cntlid": 65519, 00:15:40.084 "namespaces": [ 00:15:40.084 { 00:15:40.084 "nsid": 1, 00:15:40.084 "bdev_name": "Malloc0", 00:15:40.084 "name": "Malloc0", 00:15:40.084 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:40.084 "eui64": "ABCDEF0123456789", 00:15:40.084 "uuid": "2432149b-9a20-4ea9-aa84-d8fbf79715a5" 00:15:40.084 } 00:15:40.084 ] 00:15:40.084 } 00:15:40.084 ] 00:15:40.084 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.084 07:42:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:40.084 [2024-11-08 07:42:57.829534] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
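For reference, the subsystem listing above was assembled through the test's rpc_cmd wrapper; a condensed sketch of the same configuration issued directly with scripts/rpc.py follows. The method names and arguments are copied from the rpc_cmd calls in this log; the rpc.py path and the default RPC socket (/var/tmp/spdk.sock, per the waitforlisten message earlier) are assumptions about the local setup rather than output of this job:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte block size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems                          # should report the discovery and cnode1 subsystems, as above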
00:15:40.085 [2024-11-08 07:42:57.829589] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73893 ] 00:15:40.085 [2024-11-08 07:42:57.982788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:15:40.085 [2024-11-08 07:42:57.982837] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:40.085 [2024-11-08 07:42:57.982843] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:40.085 [2024-11-08 07:42:57.982854] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:40.085 [2024-11-08 07:42:57.982862] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:40.085 [2024-11-08 07:42:57.987152] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:15:40.085 [2024-11-08 07:42:57.987220] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1498750 0 00:15:40.085 [2024-11-08 07:42:57.987284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:40.085 [2024-11-08 07:42:57.987292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:40.085 [2024-11-08 07:42:57.987297] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:40.085 [2024-11-08 07:42:57.987301] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:40.085 [2024-11-08 07:42:57.987328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.987334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.987338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.085 [2024-11-08 07:42:57.987351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:40.085 [2024-11-08 07:42:57.987370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.085 [2024-11-08 07:42:57.994996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.085 [2024-11-08 07:42:57.995012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.085 [2024-11-08 07:42:57.995017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.085 [2024-11-08 07:42:57.995034] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:40.085 [2024-11-08 07:42:57.995042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:15:40.085 [2024-11-08 07:42:57.995048] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:15:40.085 [2024-11-08 07:42:57.995061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:40.085 [2024-11-08 07:42:57.995069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.085 [2024-11-08 07:42:57.995078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.085 [2024-11-08 07:42:57.995103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.085 [2024-11-08 07:42:57.995151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.085 [2024-11-08 07:42:57.995157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.085 [2024-11-08 07:42:57.995161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.085 [2024-11-08 07:42:57.995171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:15:40.085 [2024-11-08 07:42:57.995178] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:15:40.085 [2024-11-08 07:42:57.995185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.085 [2024-11-08 07:42:57.995199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.085 [2024-11-08 07:42:57.995212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.085 [2024-11-08 07:42:57.995249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.085 [2024-11-08 07:42:57.995255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.085 [2024-11-08 07:42:57.995259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.085 [2024-11-08 07:42:57.995268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:15:40.085 [2024-11-08 07:42:57.995277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:40.085 [2024-11-08 07:42:57.995283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.085 [2024-11-08 07:42:57.995297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.085 [2024-11-08 07:42:57.995310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.085 [2024-11-08 07:42:57.995342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.085 [2024-11-08 07:42:57.995348] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.085 [2024-11-08 07:42:57.995351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.085 [2024-11-08 07:42:57.995360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:40.085 [2024-11-08 07:42:57.995370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.085 [2024-11-08 07:42:57.995384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.085 [2024-11-08 07:42:57.995397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.085 [2024-11-08 07:42:57.995434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.085 [2024-11-08 07:42:57.995440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.085 [2024-11-08 07:42:57.995444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.085 [2024-11-08 07:42:57.995452] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:40.085 [2024-11-08 07:42:57.995458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:40.085 [2024-11-08 07:42:57.995465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:40.085 [2024-11-08 07:42:57.995575] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:15:40.085 [2024-11-08 07:42:57.995580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:40.085 [2024-11-08 07:42:57.995589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.085 [2024-11-08 07:42:57.995603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.085 [2024-11-08 07:42:57.995616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.085 [2024-11-08 07:42:57.995651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.085 [2024-11-08 07:42:57.995656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.085 [2024-11-08 07:42:57.995660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:15:40.085 [2024-11-08 07:42:57.995664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.085 [2024-11-08 07:42:57.995669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:40.085 [2024-11-08 07:42:57.995677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.085 [2024-11-08 07:42:57.995691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.085 [2024-11-08 07:42:57.995704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.085 [2024-11-08 07:42:57.995736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.085 [2024-11-08 07:42:57.995742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.085 [2024-11-08 07:42:57.995745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.085 [2024-11-08 07:42:57.995754] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:40.085 [2024-11-08 07:42:57.995759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:40.085 [2024-11-08 07:42:57.995767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:15:40.085 [2024-11-08 07:42:57.995780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:40.085 [2024-11-08 07:42:57.995789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.085 [2024-11-08 07:42:57.995793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.085 [2024-11-08 07:42:57.995799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.085 [2024-11-08 07:42:57.995813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.085 [2024-11-08 07:42:57.995879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.085 [2024-11-08 07:42:57.995885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.086 [2024-11-08 07:42:57.995889] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.995893] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1498750): datao=0, datal=4096, cccid=0 00:15:40.086 [2024-11-08 07:42:57.995898] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14fc740) on tqpair(0x1498750): expected_datao=0, payload_size=4096 00:15:40.086 [2024-11-08 07:42:57.995903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.995911] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.995915] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.995924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.086 [2024-11-08 07:42:57.995929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.086 [2024-11-08 07:42:57.995933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.995937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.086 [2024-11-08 07:42:57.995945] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:15:40.086 [2024-11-08 07:42:57.995950] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:15:40.086 [2024-11-08 07:42:57.995955] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:15:40.086 [2024-11-08 07:42:57.995960] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:15:40.086 [2024-11-08 07:42:57.995965] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:15:40.086 [2024-11-08 07:42:57.995970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:15:40.086 [2024-11-08 07:42:57.995991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:40.086 [2024-11-08 07:42:57.995999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.086 [2024-11-08 07:42:57.996027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.086 [2024-11-08 07:42:57.996068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.086 [2024-11-08 07:42:57.996073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.086 [2024-11-08 07:42:57.996077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.086 [2024-11-08 07:42:57.996089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.086 
[2024-11-08 07:42:57.996108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.086 [2024-11-08 07:42:57.996127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.086 [2024-11-08 07:42:57.996146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.086 [2024-11-08 07:42:57.996164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:40.086 [2024-11-08 07:42:57.996175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:40.086 [2024-11-08 07:42:57.996182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.086 [2024-11-08 07:42:57.996206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc740, cid 0, qid 0 00:15:40.086 [2024-11-08 07:42:57.996211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fc8c0, cid 1, qid 0 00:15:40.086 [2024-11-08 07:42:57.996216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fca40, cid 2, qid 0 00:15:40.086 [2024-11-08 07:42:57.996221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.086 [2024-11-08 07:42:57.996225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcd40, cid 4, qid 0 00:15:40.086 [2024-11-08 07:42:57.996292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.086 [2024-11-08 07:42:57.996298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.086 [2024-11-08 07:42:57.996302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcd40) on tqpair=0x1498750 00:15:40.086 [2024-11-08 
07:42:57.996311] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:15:40.086 [2024-11-08 07:42:57.996316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:15:40.086 [2024-11-08 07:42:57.996325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.086 [2024-11-08 07:42:57.996348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcd40, cid 4, qid 0 00:15:40.086 [2024-11-08 07:42:57.996397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.086 [2024-11-08 07:42:57.996402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.086 [2024-11-08 07:42:57.996406] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996410] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1498750): datao=0, datal=4096, cccid=4 00:15:40.086 [2024-11-08 07:42:57.996415] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14fcd40) on tqpair(0x1498750): expected_datao=0, payload_size=4096 00:15:40.086 [2024-11-08 07:42:57.996419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996426] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996429] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.086 [2024-11-08 07:42:57.996443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.086 [2024-11-08 07:42:57.996446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcd40) on tqpair=0x1498750 00:15:40.086 [2024-11-08 07:42:57.996462] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:15:40.086 [2024-11-08 07:42:57.996487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.086 [2024-11-08 07:42:57.996504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1498750) 00:15:40.086 [2024-11-08 07:42:57.996518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.086 [2024-11-08 07:42:57.996535] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcd40, cid 4, qid 0 00:15:40.086 [2024-11-08 07:42:57.996540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcec0, cid 5, qid 0 00:15:40.086 [2024-11-08 07:42:57.996622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.086 [2024-11-08 07:42:57.996628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.086 [2024-11-08 07:42:57.996631] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996635] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1498750): datao=0, datal=1024, cccid=4 00:15:40.086 [2024-11-08 07:42:57.996640] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14fcd40) on tqpair(0x1498750): expected_datao=0, payload_size=1024 00:15:40.086 [2024-11-08 07:42:57.996645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996651] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996655] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.086 [2024-11-08 07:42:57.996666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.086 [2024-11-08 07:42:57.996669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcec0) on tqpair=0x1498750 00:15:40.086 [2024-11-08 07:42:57.996687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.086 [2024-11-08 07:42:57.996692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.086 [2024-11-08 07:42:57.996696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcd40) on tqpair=0x1498750 00:15:40.086 [2024-11-08 07:42:57.996724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.086 [2024-11-08 07:42:57.996732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1498750) 00:15:40.087 [2024-11-08 07:42:57.996738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.087 [2024-11-08 07:42:57.996759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcd40, cid 4, qid 0 00:15:40.087 [2024-11-08 07:42:57.996821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.087 [2024-11-08 07:42:57.996826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.087 [2024-11-08 07:42:57.996830] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.996834] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1498750): datao=0, datal=3072, cccid=4 00:15:40.087 [2024-11-08 07:42:57.996839] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14fcd40) on tqpair(0x1498750): expected_datao=0, payload_size=3072 00:15:40.087 [2024-11-08 07:42:57.996844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.996850] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
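For scale, the GET LOG PAGE sizes in this exchange line up with the NVMe-oF discovery log layout: the page starts with a 1024-byte header (generation counter, number of records, record format) followed by one 1024-byte entry per record. The 1024-byte read above fetches just the header, the 3072-byte read covers the header plus 2 x 1024 = 2048 bytes of entries for the two records reported further down, and the 8-byte read that follows appears to re-fetch only the generation counter to confirm the log did not change while it was being read.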
00:15:40.087 [2024-11-08 07:42:57.996854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.996861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.087 [2024-11-08 07:42:57.996867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.087 [2024-11-08 07:42:57.996871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.996875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcd40) on tqpair=0x1498750 00:15:40.087 [2024-11-08 07:42:57.996883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.996887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1498750) 00:15:40.087 [2024-11-08 07:42:57.996893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.087 [2024-11-08 07:42:57.996909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcd40, cid 4, qid 0 00:15:40.087 [2024-11-08 07:42:57.996952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.087 [2024-11-08 07:42:57.996958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.087 [2024-11-08 07:42:57.996962] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.087 ===================================================== 00:15:40.087 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:40.087 ===================================================== 00:15:40.087 Controller Capabilities/Features 00:15:40.087 ================================ 00:15:40.087 Vendor ID: 0000 00:15:40.087 Subsystem Vendor ID: 0000 00:15:40.087 Serial Number: .................... 00:15:40.087 Model Number: ........................................ 
00:15:40.087 Firmware Version: 25.01 00:15:40.087 Recommended Arb Burst: 0 00:15:40.087 IEEE OUI Identifier: 00 00 00 00:15:40.087 Multi-path I/O 00:15:40.087 May have multiple subsystem ports: No 00:15:40.087 May have multiple controllers: No 00:15:40.087 Associated with SR-IOV VF: No 00:15:40.087 Max Data Transfer Size: 131072 00:15:40.087 Max Number of Namespaces: 0 00:15:40.087 Max Number of I/O Queues: 1024 00:15:40.087 NVMe Specification Version (VS): 1.3 00:15:40.087 NVMe Specification Version (Identify): 1.3 00:15:40.087 Maximum Queue Entries: 128 00:15:40.087 Contiguous Queues Required: Yes 00:15:40.087 Arbitration Mechanisms Supported 00:15:40.087 Weighted Round Robin: Not Supported 00:15:40.087 Vendor Specific: Not Supported 00:15:40.087 Reset Timeout: 15000 ms 00:15:40.087 Doorbell Stride: 4 bytes 00:15:40.087 NVM Subsystem Reset: Not Supported 00:15:40.087 Command Sets Supported 00:15:40.087 NVM Command Set: Supported 00:15:40.087 Boot Partition: Not Supported 00:15:40.087 Memory Page Size Minimum: 4096 bytes 00:15:40.087 Memory Page Size Maximum: 4096 bytes 00:15:40.087 Persistent Memory Region: Not Supported 00:15:40.087 Optional Asynchronous Events Supported 00:15:40.087 Namespace Attribute Notices: Not Supported 00:15:40.087 Firmware Activation Notices: Not Supported 00:15:40.087 ANA Change Notices: Not Supported 00:15:40.087 PLE Aggregate Log Change Notices: Not Supported 00:15:40.087 LBA Status Info Alert Notices: Not Supported 00:15:40.087 EGE Aggregate Log Change Notices: Not Supported 00:15:40.087 Normal NVM Subsystem Shutdown event: Not Supported 00:15:40.087 Zone Descriptor Change Notices: Not Supported 00:15:40.087 Discovery Log Change Notices: Supported 00:15:40.087 Controller Attributes 00:15:40.087 128-bit Host Identifier: Not Supported 00:15:40.087 Non-Operational Permissive Mode: Not Supported 00:15:40.087 NVM Sets: Not Supported 00:15:40.087 Read Recovery Levels: Not Supported 00:15:40.087 Endurance Groups: Not Supported 00:15:40.087 Predictable Latency Mode: Not Supported 00:15:40.087 Traffic Based Keep ALive: Not Supported 00:15:40.087 Namespace Granularity: Not Supported 00:15:40.087 SQ Associations: Not Supported 00:15:40.087 UUID List: Not Supported 00:15:40.087 Multi-Domain Subsystem: Not Supported 00:15:40.087 Fixed Capacity Management: Not Supported 00:15:40.087 Variable Capacity Management: Not Supported 00:15:40.087 Delete Endurance Group: Not Supported 00:15:40.087 Delete NVM Set: Not Supported 00:15:40.087 Extended LBA Formats Supported: Not Supported 00:15:40.087 Flexible Data Placement Supported: Not Supported 00:15:40.087 00:15:40.087 Controller Memory Buffer Support 00:15:40.087 ================================ 00:15:40.087 Supported: No 00:15:40.087 00:15:40.087 Persistent Memory Region Support 00:15:40.087 ================================ 00:15:40.087 Supported: No 00:15:40.087 00:15:40.087 Admin Command Set Attributes 00:15:40.087 ============================ 00:15:40.087 Security Send/Receive: Not Supported 00:15:40.087 Format NVM: Not Supported 00:15:40.087 Firmware Activate/Download: Not Supported 00:15:40.087 Namespace Management: Not Supported 00:15:40.087 Device Self-Test: Not Supported 00:15:40.087 Directives: Not Supported 00:15:40.087 NVMe-MI: Not Supported 00:15:40.087 Virtualization Management: Not Supported 00:15:40.087 Doorbell Buffer Config: Not Supported 00:15:40.087 Get LBA Status Capability: Not Supported 00:15:40.087 Command & Feature Lockdown Capability: Not Supported 00:15:40.087 Abort Command Limit: 1 00:15:40.087 Async 
Event Request Limit: 4 00:15:40.087 Number of Firmware Slots: N/A 00:15:40.087 Firmware Slot 1 Read-Only: N/A 00:15:40.087 [2024-11-08 07:42:57.996965] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1498750): datao=0, datal=8, cccid=4 00:15:40.087 [2024-11-08 07:42:57.996971] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14fcd40) on tqpair(0x1498750): expected_datao=0, payload_size=8 00:15:40.087 [2024-11-08 07:42:57.996975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.996992] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.996996] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.997008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.087 [2024-11-08 07:42:57.997014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.087 [2024-11-08 07:42:57.997018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.087 [2024-11-08 07:42:57.997022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcd40) on tqpair=0x1498750 00:15:40.087 Firmware Activation Without Reset: N/A 00:15:40.087 Multiple Update Detection Support: N/A 00:15:40.087 Firmware Update Granularity: No Information Provided 00:15:40.087 Per-Namespace SMART Log: No 00:15:40.087 Asymmetric Namespace Access Log Page: Not Supported 00:15:40.087 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:40.087 Command Effects Log Page: Not Supported 00:15:40.087 Get Log Page Extended Data: Supported 00:15:40.087 Telemetry Log Pages: Not Supported 00:15:40.087 Persistent Event Log Pages: Not Supported 00:15:40.087 Supported Log Pages Log Page: May Support 00:15:40.087 Commands Supported & Effects Log Page: Not Supported 00:15:40.087 Feature Identifiers & Effects Log Page:May Support 00:15:40.087 NVMe-MI Commands & Effects Log Page: May Support 00:15:40.087 Data Area 4 for Telemetry Log: Not Supported 00:15:40.087 Error Log Page Entries Supported: 128 00:15:40.087 Keep Alive: Not Supported 00:15:40.087 00:15:40.087 NVM Command Set Attributes 00:15:40.087 ========================== 00:15:40.087 Submission Queue Entry Size 00:15:40.087 Max: 1 00:15:40.087 Min: 1 00:15:40.087 Completion Queue Entry Size 00:15:40.087 Max: 1 00:15:40.087 Min: 1 00:15:40.087 Number of Namespaces: 0 00:15:40.087 Compare Command: Not Supported 00:15:40.087 Write Uncorrectable Command: Not Supported 00:15:40.087 Dataset Management Command: Not Supported 00:15:40.087 Write Zeroes Command: Not Supported 00:15:40.087 Set Features Save Field: Not Supported 00:15:40.087 Reservations: Not Supported 00:15:40.087 Timestamp: Not Supported 00:15:40.087 Copy: Not Supported 00:15:40.087 Volatile Write Cache: Not Present 00:15:40.087 Atomic Write Unit (Normal): 1 00:15:40.087 Atomic Write Unit (PFail): 1 00:15:40.087 Atomic Compare & Write Unit: 1 00:15:40.087 Fused Compare & Write: Supported 00:15:40.087 Scatter-Gather List 00:15:40.087 SGL Command Set: Supported 00:15:40.087 SGL Keyed: Supported 00:15:40.087 SGL Bit Bucket Descriptor: Not Supported 00:15:40.087 SGL Metadata Pointer: Not Supported 00:15:40.087 Oversized SGL: Not Supported 00:15:40.087 SGL Metadata Address: Not Supported 00:15:40.087 SGL Offset: Supported 00:15:40.087 Transport SGL Data Block: Not Supported 00:15:40.088 Replay Protected Memory Block: Not Supported 00:15:40.088 00:15:40.088 Firmware Slot Information 00:15:40.088 
========================= 00:15:40.088 Active slot: 0 00:15:40.088 00:15:40.088 00:15:40.088 Error Log 00:15:40.088 ========= 00:15:40.088 00:15:40.088 Active Namespaces 00:15:40.088 ================= 00:15:40.088 Discovery Log Page 00:15:40.088 ================== 00:15:40.088 Generation Counter: 2 00:15:40.088 Number of Records: 2 00:15:40.088 Record Format: 0 00:15:40.088 00:15:40.088 Discovery Log Entry 0 00:15:40.088 ---------------------- 00:15:40.088 Transport Type: 3 (TCP) 00:15:40.088 Address Family: 1 (IPv4) 00:15:40.088 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:40.088 Entry Flags: 00:15:40.088 Duplicate Returned Information: 1 00:15:40.088 Explicit Persistent Connection Support for Discovery: 1 00:15:40.088 Transport Requirements: 00:15:40.088 Secure Channel: Not Required 00:15:40.088 Port ID: 0 (0x0000) 00:15:40.088 Controller ID: 65535 (0xffff) 00:15:40.088 Admin Max SQ Size: 128 00:15:40.088 Transport Service Identifier: 4420 00:15:40.088 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:40.088 Transport Address: 10.0.0.3 00:15:40.088 Discovery Log Entry 1 00:15:40.088 ---------------------- 00:15:40.088 Transport Type: 3 (TCP) 00:15:40.088 Address Family: 1 (IPv4) 00:15:40.088 Subsystem Type: 2 (NVM Subsystem) 00:15:40.088 Entry Flags: 00:15:40.088 Duplicate Returned Information: 0 00:15:40.088 Explicit Persistent Connection Support for Discovery: 0 00:15:40.088 Transport Requirements: 00:15:40.088 Secure Channel: Not Required 00:15:40.088 Port ID: 0 (0x0000) 00:15:40.088 Controller ID: 65535 (0xffff) 00:15:40.088 Admin Max SQ Size: 128 00:15:40.088 Transport Service Identifier: 4420 00:15:40.088 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:40.088 Transport Address: 10.0.0.3 [2024-11-08 07:42:57.997118] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:15:40.088 [2024-11-08 07:42:57.997129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc740) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.088 [2024-11-08 07:42:57.997142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fc8c0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.088 [2024-11-08 07:42:57.997152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fca40) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.088 [2024-11-08 07:42:57.997163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.088 [2024-11-08 07:42:57.997177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.088 [2024-11-08 07:42:57.997193] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.088 [2024-11-08 07:42:57.997210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.088 [2024-11-08 07:42:57.997255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.088 [2024-11-08 07:42:57.997262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.088 [2024-11-08 07:42:57.997266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.088 [2024-11-08 07:42:57.997293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.088 [2024-11-08 07:42:57.997310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.088 [2024-11-08 07:42:57.997366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.088 [2024-11-08 07:42:57.997373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.088 [2024-11-08 07:42:57.997377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997386] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:15:40.088 [2024-11-08 07:42:57.997392] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:15:40.088 [2024-11-08 07:42:57.997402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.088 [2024-11-08 07:42:57.997417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.088 [2024-11-08 07:42:57.997431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.088 [2024-11-08 07:42:57.997470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.088 [2024-11-08 07:42:57.997476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.088 [2024-11-08 07:42:57.997480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997503] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.088 [2024-11-08 07:42:57.997510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.088 [2024-11-08 07:42:57.997523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.088 [2024-11-08 07:42:57.997562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.088 [2024-11-08 07:42:57.997569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.088 [2024-11-08 07:42:57.997573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.088 [2024-11-08 07:42:57.997602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.088 [2024-11-08 07:42:57.997615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.088 [2024-11-08 07:42:57.997660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.088 [2024-11-08 07:42:57.997666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.088 [2024-11-08 07:42:57.997671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.088 [2024-11-08 07:42:57.997700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.088 [2024-11-08 07:42:57.997713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.088 [2024-11-08 07:42:57.997752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.088 [2024-11-08 07:42:57.997758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.088 [2024-11-08 07:42:57.997762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.088 [2024-11-08 07:42:57.997792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.088 [2024-11-08 07:42:57.997805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.088 [2024-11-08 07:42:57.997844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.088 [2024-11-08 07:42:57.997850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.088 [2024-11-08 07:42:57.997854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.088 [2024-11-08 07:42:57.997868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.088 [2024-11-08 07:42:57.997876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.088 [2024-11-08 07:42:57.997883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.088 [2024-11-08 07:42:57.997897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.997933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.997939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.997943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.997948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.997957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.997962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.997966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.997973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.997986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.998033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.998044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.998074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.998088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 
07:42:57.998133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.998143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.998172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.998186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.998221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.998232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.998261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.998274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.998313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.998323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.998352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.998365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.998404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 
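Editor's note: the repeated FABRIC PROPERTY GET notices above are the host polling the discovery controller's status registers over the admin queue while it is being shut down (the "shutdown complete" debug message follows a few entries later). From an application's point of view this teardown is driven by SPDK's detach path. The following is a minimal, hedged sketch of that pattern using SPDK's public host API, not part of the test itself; it assumes `ctrlr` was obtained earlier from spdk_nvme_connect() or spdk_nvme_probe(), and the function name detach_controller is illustrative.

/*
 * Sketch only: asynchronous detach of an NVMe controller with SPDK.
 * The repeated property GETs seen in the log correspond to this polling.
 */
#include <errno.h>
#include "spdk/nvme.h"

static int
detach_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *detach_ctx = NULL;
	int rc;

	/* Start the asynchronous detach; this initiates the shutdown sequence. */
	rc = spdk_nvme_detach_async(ctrlr, &detach_ctx);
	if (rc != 0 || detach_ctx == NULL) {
		return rc;
	}

	/* Poll until the controller reports shutdown complete. */
	while (spdk_nvme_detach_poll_async(detach_ctx) == -EAGAIN) {
		/* other work could be interleaved here between polls */
	}

	return 0;
}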
[2024-11-08 07:42:57.998414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.998443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.998456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.998495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.998505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.998534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.998548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.998584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.998594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.998623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.998636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.998689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.998700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.089 [2024-11-08 07:42:57.998729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.089 [2024-11-08 07:42:57.998743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.089 [2024-11-08 07:42:57.998798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.089 [2024-11-08 07:42:57.998803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.089 [2024-11-08 07:42:57.998807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.089 [2024-11-08 07:42:57.998819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.089 [2024-11-08 07:42:57.998827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.090 [2024-11-08 07:42:57.998833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.090 [2024-11-08 07:42:57.998846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.090 [2024-11-08 07:42:57.998881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.090 [2024-11-08 07:42:57.998887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.090 [2024-11-08 07:42:57.998890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.090 [2024-11-08 07:42:57.998894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.090 [2024-11-08 07:42:57.998903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.090 [2024-11-08 07:42:57.998907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.090 [2024-11-08 07:42:57.998911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.090 [2024-11-08 07:42:57.998917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.090 [2024-11-08 07:42:57.998929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.090 [2024-11-08 07:42:57.998967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.090 [2024-11-08 07:42:57.998973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.090 [2024-11-08 07:42:57.998976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.090 [2024-11-08 07:42:57.998980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.090 [2024-11-08 07:42:58.002989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.090 [2024-11-08 07:42:58.003004] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.090 [2024-11-08 07:42:58.003009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1498750) 00:15:40.090 [2024-11-08 07:42:58.003016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.090 [2024-11-08 07:42:58.003034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14fcbc0, cid 3, qid 0 00:15:40.090 [2024-11-08 07:42:58.003077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.090 [2024-11-08 07:42:58.003083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.090 [2024-11-08 07:42:58.003087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.090 [2024-11-08 07:42:58.003091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14fcbc0) on tqpair=0x1498750 00:15:40.090 [2024-11-08 07:42:58.003099] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:15:40.090 00:15:40.090 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:40.090 [2024-11-08 07:42:58.039680] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:40.090 [2024-11-08 07:42:58.039718] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73899 ] 00:15:40.353 [2024-11-08 07:42:58.188325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:15:40.353 [2024-11-08 07:42:58.188374] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:40.353 [2024-11-08 07:42:58.188380] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:40.353 [2024-11-08 07:42:58.188391] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:40.353 [2024-11-08 07:42:58.188400] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:40.353 [2024-11-08 07:42:58.188678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:15:40.353 [2024-11-08 07:42:58.188758] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19b6750 0 00:15:40.353 [2024-11-08 07:42:58.195998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:40.353 [2024-11-08 07:42:58.196016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:40.353 [2024-11-08 07:42:58.196022] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:40.353 [2024-11-08 07:42:58.196026] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:40.353 [2024-11-08 07:42:58.196057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.196063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.196068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.353 [2024-11-08 07:42:58.196080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:40.353 [2024-11-08 07:42:58.196107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.353 [2024-11-08 07:42:58.203018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.353 [2024-11-08 07:42:58.203033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.353 [2024-11-08 07:42:58.203038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.353 [2024-11-08 07:42:58.203055] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:40.353 [2024-11-08 07:42:58.203063] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:15:40.353 [2024-11-08 07:42:58.203069] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:15:40.353 [2024-11-08 07:42:58.203082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.353 [2024-11-08 07:42:58.203098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.353 [2024-11-08 07:42:58.203118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.353 [2024-11-08 07:42:58.203158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.353 [2024-11-08 07:42:58.203164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.353 [2024-11-08 07:42:58.203168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.353 [2024-11-08 07:42:58.203178] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:15:40.353 [2024-11-08 07:42:58.203185] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:15:40.353 [2024-11-08 07:42:58.203192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.353 [2024-11-08 07:42:58.203206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.353 [2024-11-08 07:42:58.203220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.353 [2024-11-08 07:42:58.203253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.353 [2024-11-08 07:42:58.203258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:40.353 [2024-11-08 07:42:58.203262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.353 [2024-11-08 07:42:58.203271] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:15:40.353 [2024-11-08 07:42:58.203279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:40.353 [2024-11-08 07:42:58.203286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.353 [2024-11-08 07:42:58.203300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.353 [2024-11-08 07:42:58.203313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.353 [2024-11-08 07:42:58.203347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.353 [2024-11-08 07:42:58.203353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.353 [2024-11-08 07:42:58.203356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.353 [2024-11-08 07:42:58.203366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:40.353 [2024-11-08 07:42:58.203374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.353 [2024-11-08 07:42:58.203388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.353 [2024-11-08 07:42:58.203401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.353 [2024-11-08 07:42:58.203445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.353 [2024-11-08 07:42:58.203451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.353 [2024-11-08 07:42:58.203454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.353 [2024-11-08 07:42:58.203463] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:40.353 [2024-11-08 07:42:58.203468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:40.353 [2024-11-08 07:42:58.203476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:15:40.353 [2024-11-08 07:42:58.203585] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:15:40.353 [2024-11-08 07:42:58.203591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:40.353 [2024-11-08 07:42:58.203600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.353 [2024-11-08 07:42:58.203614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.353 [2024-11-08 07:42:58.203627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.353 [2024-11-08 07:42:58.203666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.353 [2024-11-08 07:42:58.203671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.353 [2024-11-08 07:42:58.203675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.353 [2024-11-08 07:42:58.203684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:40.353 [2024-11-08 07:42:58.203693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.353 [2024-11-08 07:42:58.203707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.353 [2024-11-08 07:42:58.203720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.353 [2024-11-08 07:42:58.203758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.353 [2024-11-08 07:42:58.203764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.353 [2024-11-08 07:42:58.203768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.353 [2024-11-08 07:42:58.203772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.354 [2024-11-08 07:42:58.203776] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:40.354 [2024-11-08 07:42:58.203782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.203789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:15:40.354 [2024-11-08 07:42:58.203803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:40.354 
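Editor's note: the debug entries above trace the controller initialization state machine that the SPDK host library runs when connecting to an NVMe-oF target: FABRIC CONNECT on the admin queue, reading VS and CAP, toggling CC.EN with CSTS.RDY polling, then IDENTIFY, AER configuration, keep-alive and queue-count setup. Below is a minimal, hedged sketch of how an application triggers this same sequence against the subsystem used by the test; it is not the spdk_nvme_identify tool itself, the program name "identify_sketch" is illustrative, and the transport ID string mirrors the one passed with -r in the log.

/*
 * Sketch only: connect to the NVMe-oF/TCP subsystem from the log and read
 * the cached Identify Controller data. spdk_nvme_connect() drives the init
 * state machine shown in the surrounding debug output.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* illustrative process name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);	/* runs connect/init sequence */
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s  Serial: %.20s  FW: %.8s\n",
	       cdata->mn, cdata->sn, cdata->fr);
	printf("Max transfer size: %u bytes\n",
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}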
[2024-11-08 07:42:58.203812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.203816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.203822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.354 [2024-11-08 07:42:58.203836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.354 [2024-11-08 07:42:58.203914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.354 [2024-11-08 07:42:58.203920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.354 [2024-11-08 07:42:58.203923] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.203928] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6750): datao=0, datal=4096, cccid=0 00:15:40.354 [2024-11-08 07:42:58.203933] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1a740) on tqpair(0x19b6750): expected_datao=0, payload_size=4096 00:15:40.354 [2024-11-08 07:42:58.203938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.203945] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.203949] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.203957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.354 [2024-11-08 07:42:58.203963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.354 [2024-11-08 07:42:58.203967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.203971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.354 [2024-11-08 07:42:58.203988] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:15:40.354 [2024-11-08 07:42:58.203994] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:15:40.354 [2024-11-08 07:42:58.203999] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:15:40.354 [2024-11-08 07:42:58.204004] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:15:40.354 [2024-11-08 07:42:58.204009] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:15:40.354 [2024-11-08 07:42:58.204014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.204048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.354 [2024-11-08 07:42:58.204062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.354 [2024-11-08 07:42:58.204099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.354 [2024-11-08 07:42:58.204104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.354 [2024-11-08 07:42:58.204108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.354 [2024-11-08 07:42:58.204119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.204133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.354 [2024-11-08 07:42:58.204139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.204153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.354 [2024-11-08 07:42:58.204159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.204173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.354 [2024-11-08 07:42:58.204179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.204192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.354 [2024-11-08 07:42:58.204197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.204225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 
cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.354 [2024-11-08 07:42:58.204239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a740, cid 0, qid 0 00:15:40.354 [2024-11-08 07:42:58.204245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1a8c0, cid 1, qid 0 00:15:40.354 [2024-11-08 07:42:58.204249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1aa40, cid 2, qid 0 00:15:40.354 [2024-11-08 07:42:58.204254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.354 [2024-11-08 07:42:58.204259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ad40, cid 4, qid 0 00:15:40.354 [2024-11-08 07:42:58.204331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.354 [2024-11-08 07:42:58.204337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.354 [2024-11-08 07:42:58.204341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ad40) on tqpair=0x19b6750 00:15:40.354 [2024-11-08 07:42:58.204351] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:15:40.354 [2024-11-08 07:42:58.204356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.204394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.354 [2024-11-08 07:42:58.204407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ad40, cid 4, qid 0 00:15:40.354 [2024-11-08 07:42:58.204449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.354 [2024-11-08 07:42:58.204454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.354 [2024-11-08 07:42:58.204458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ad40) on tqpair=0x19b6750 00:15:40.354 [2024-11-08 07:42:58.204514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:40.354 [2024-11-08 07:42:58.204531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.354 [2024-11-08 
07:42:58.204535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6750) 00:15:40.354 [2024-11-08 07:42:58.204541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.354 [2024-11-08 07:42:58.204554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ad40, cid 4, qid 0 00:15:40.354 [2024-11-08 07:42:58.204598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.354 [2024-11-08 07:42:58.204604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.354 [2024-11-08 07:42:58.204607] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204611] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6750): datao=0, datal=4096, cccid=4 00:15:40.354 [2024-11-08 07:42:58.204616] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ad40) on tqpair(0x19b6750): expected_datao=0, payload_size=4096 00:15:40.354 [2024-11-08 07:42:58.204621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204628] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204632] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.354 [2024-11-08 07:42:58.204645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.354 [2024-11-08 07:42:58.204649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.354 [2024-11-08 07:42:58.204653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ad40) on tqpair=0x19b6750 00:15:40.354 [2024-11-08 07:42:58.204665] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:15:40.354 [2024-11-08 07:42:58.204675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.204683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.204690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.204694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.204700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.204714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ad40, cid 4, qid 0 00:15:40.355 [2024-11-08 07:42:58.204821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.355 [2024-11-08 07:42:58.204827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.355 [2024-11-08 07:42:58.204830] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.204834] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6750): datao=0, datal=4096, cccid=4 00:15:40.355 [2024-11-08 07:42:58.204839] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1a1ad40) on tqpair(0x19b6750): expected_datao=0, payload_size=4096 00:15:40.355 [2024-11-08 07:42:58.204844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.204850] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.204854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.204861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.355 [2024-11-08 07:42:58.204867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.355 [2024-11-08 07:42:58.204871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.204875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ad40) on tqpair=0x19b6750 00:15:40.355 [2024-11-08 07:42:58.204890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.204899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.204906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.204910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.204916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.204929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ad40, cid 4, qid 0 00:15:40.355 [2024-11-08 07:42:58.204971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.355 [2024-11-08 07:42:58.204989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.355 [2024-11-08 07:42:58.204993] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.204997] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6750): datao=0, datal=4096, cccid=4 00:15:40.355 [2024-11-08 07:42:58.205002] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ad40) on tqpair(0x19b6750): expected_datao=0, payload_size=4096 00:15:40.355 [2024-11-08 07:42:58.205007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205013] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205017] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.355 [2024-11-08 07:42:58.205031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.355 [2024-11-08 07:42:58.205034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ad40) on tqpair=0x19b6750 00:15:40.355 [2024-11-08 07:42:58.205046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.205054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.205063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.205070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.205075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.205081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.205086] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:15:40.355 [2024-11-08 07:42:58.205091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:15:40.355 [2024-11-08 07:42:58.205096] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:15:40.355 [2024-11-08 07:42:58.205112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.205129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.355 [2024-11-08 07:42:58.205161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ad40, cid 4, qid 0 00:15:40.355 [2024-11-08 07:42:58.205166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1aec0, cid 5, qid 0 00:15:40.355 [2024-11-08 07:42:58.205214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.355 [2024-11-08 07:42:58.205219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.355 [2024-11-08 07:42:58.205223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ad40) on tqpair=0x19b6750 00:15:40.355 [2024-11-08 07:42:58.205233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.355 [2024-11-08 07:42:58.205239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.355 [2024-11-08 07:42:58.205242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1aec0) on tqpair=0x19b6750 00:15:40.355 [2024-11-08 07:42:58.205256] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.205278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1aec0, cid 5, qid 0 00:15:40.355 [2024-11-08 07:42:58.205317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.355 [2024-11-08 07:42:58.205323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.355 [2024-11-08 07:42:58.205327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1aec0) on tqpair=0x19b6750 00:15:40.355 [2024-11-08 07:42:58.205340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.205363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1aec0, cid 5, qid 0 00:15:40.355 [2024-11-08 07:42:58.205410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.355 [2024-11-08 07:42:58.205416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.355 [2024-11-08 07:42:58.205420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1aec0) on tqpair=0x19b6750 00:15:40.355 [2024-11-08 07:42:58.205433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.205455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1aec0, cid 5, qid 0 00:15:40.355 [2024-11-08 07:42:58.205494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.355 [2024-11-08 07:42:58.205500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.355 [2024-11-08 07:42:58.205504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1aec0) on tqpair=0x19b6750 00:15:40.355 [2024-11-08 07:42:58.205522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.205539] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.205557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.205574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.355 [2024-11-08 07:42:58.205578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19b6750) 00:15:40.355 [2024-11-08 07:42:58.205584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.355 [2024-11-08 07:42:58.205598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1aec0, cid 5, qid 0 00:15:40.355 [2024-11-08 07:42:58.205603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1ad40, cid 4, qid 0 00:15:40.355 [2024-11-08 07:42:58.205608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1b040, cid 6, qid 0 00:15:40.356 [2024-11-08 07:42:58.205613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1b1c0, cid 7, qid 0 00:15:40.356 [2024-11-08 07:42:58.205726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.356 [2024-11-08 07:42:58.205732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.356 [2024-11-08 07:42:58.205735] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205739] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6750): datao=0, datal=8192, cccid=5 00:15:40.356 [2024-11-08 07:42:58.205744] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1aec0) on tqpair(0x19b6750): expected_datao=0, payload_size=8192 00:15:40.356 [2024-11-08 07:42:58.205749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205763] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205767] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.356 [2024-11-08 07:42:58.205778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.356 [2024-11-08 07:42:58.205782] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205786] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6750): datao=0, datal=512, cccid=4 00:15:40.356 [2024-11-08 07:42:58.205791] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1ad40) on tqpair(0x19b6750): expected_datao=0, payload_size=512 00:15:40.356 [2024-11-08 07:42:58.205795] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205801] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205805] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.356 [2024-11-08 07:42:58.205816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.356 [2024-11-08 07:42:58.205819] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205823] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6750): datao=0, datal=512, cccid=6 00:15:40.356 [2024-11-08 07:42:58.205828] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1b040) on tqpair(0x19b6750): expected_datao=0, payload_size=512 00:15:40.356 [2024-11-08 07:42:58.205833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205839] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205842] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.356 [2024-11-08 07:42:58.205853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.356 [2024-11-08 07:42:58.205857] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205861] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19b6750): datao=0, datal=4096, cccid=7 00:15:40.356 [2024-11-08 07:42:58.205865] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a1b1c0) on tqpair(0x19b6750): expected_datao=0, payload_size=4096 00:15:40.356 [2024-11-08 07:42:58.205870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205877] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205880] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.356 [2024-11-08 07:42:58.205891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.356 [2024-11-08 07:42:58.205895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1aec0) on tqpair=0x19b6750 00:15:40.356 [2024-11-08 07:42:58.205912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.356 [2024-11-08 07:42:58.205918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.356 [2024-11-08 07:42:58.205921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1ad40) on tqpair=0x19b6750 00:15:40.356 [2024-11-08 07:42:58.205937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.356 [2024-11-08 07:42:58.205943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.356 [2024-11-08 07:42:58.205947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.356 [2024-11-08 07:42:58.205951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1b040) on 
tqpair=0x19b6750 00:15:40.356 [2024-11-08 07:42:58.205958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.356 [2024-11-08 07:42:58.205963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.356 [2024-11-08 07:42:58.205967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.356 ===================================================== 00:15:40.356 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:40.356 ===================================================== 00:15:40.356 Controller Capabilities/Features 00:15:40.356 ================================ 00:15:40.356 Vendor ID: 8086 00:15:40.356 Subsystem Vendor ID: 8086 00:15:40.356 Serial Number: SPDK00000000000001 00:15:40.356 Model Number: SPDK bdev Controller 00:15:40.356 Firmware Version: 25.01 00:15:40.356 Recommended Arb Burst: 6 00:15:40.356 IEEE OUI Identifier: e4 d2 5c 00:15:40.356 Multi-path I/O 00:15:40.356 May have multiple subsystem ports: Yes 00:15:40.356 May have multiple controllers: Yes 00:15:40.356 Associated with SR-IOV VF: No 00:15:40.356 Max Data Transfer Size: 131072 00:15:40.356 Max Number of Namespaces: 32 00:15:40.356 Max Number of I/O Queues: 127 00:15:40.356 NVMe Specification Version (VS): 1.3 00:15:40.356 NVMe Specification Version (Identify): 1.3 00:15:40.356 Maximum Queue Entries: 128 00:15:40.356 Contiguous Queues Required: Yes 00:15:40.356 Arbitration Mechanisms Supported 00:15:40.356 Weighted Round Robin: Not Supported 00:15:40.356 Vendor Specific: Not Supported 00:15:40.356 Reset Timeout: 15000 ms 00:15:40.356 Doorbell Stride: 4 bytes 00:15:40.356 NVM Subsystem Reset: Not Supported 00:15:40.356 Command Sets Supported 00:15:40.356 NVM Command Set: Supported 00:15:40.356 Boot Partition: Not Supported 00:15:40.356 Memory Page Size Minimum: 4096 bytes 00:15:40.356 Memory Page Size Maximum: 4096 bytes 00:15:40.356 Persistent Memory Region: Not Supported 00:15:40.356 Optional Asynchronous Events Supported 00:15:40.356 Namespace Attribute Notices: Supported 00:15:40.356 Firmware Activation Notices: Not Supported 00:15:40.356 ANA Change Notices: Not Supported 00:15:40.356 PLE Aggregate Log Change Notices: Not Supported 00:15:40.356 LBA Status Info Alert Notices: Not Supported 00:15:40.356 EGE Aggregate Log Change Notices: Not Supported 00:15:40.356 Normal NVM Subsystem Shutdown event: Not Supported 00:15:40.356 Zone Descriptor Change Notices: Not Supported 00:15:40.356 Discovery Log Change Notices: Not Supported 00:15:40.356 Controller Attributes 00:15:40.356 128-bit Host Identifier: Supported 00:15:40.356 Non-Operational Permissive Mode: Not Supported 00:15:40.356 NVM Sets: Not Supported 00:15:40.356 Read Recovery Levels: Not Supported 00:15:40.356 Endurance Groups: Not Supported 00:15:40.356 Predictable Latency Mode: Not Supported 00:15:40.356 Traffic Based Keep ALive: Not Supported 00:15:40.356 Namespace Granularity: Not Supported 00:15:40.356 SQ Associations: Not Supported 00:15:40.356 UUID List: Not Supported 00:15:40.356 Multi-Domain Subsystem: Not Supported 00:15:40.356 Fixed Capacity Management: Not Supported 00:15:40.356 Variable Capacity Management: Not Supported 00:15:40.356 Delete Endurance Group: Not Supported 00:15:40.356 Delete NVM Set: Not Supported 00:15:40.356 Extended LBA Formats Supported: Not Supported 00:15:40.356 Flexible Data Placement Supported: Not Supported 00:15:40.356 00:15:40.356 Controller Memory Buffer Support 00:15:40.356 ================================ 00:15:40.356 Supported: No 
00:15:40.356 00:15:40.356 Persistent Memory Region Support 00:15:40.356 ================================ 00:15:40.356 Supported: No 00:15:40.356 00:15:40.356 Admin Command Set Attributes 00:15:40.356 ============================ 00:15:40.356 Security Send/Receive: Not Supported 00:15:40.356 Format NVM: Not Supported 00:15:40.356 Firmware Activate/Download: Not Supported 00:15:40.356 Namespace Management: Not Supported 00:15:40.356 Device Self-Test: Not Supported 00:15:40.356 Directives: Not Supported 00:15:40.356 NVMe-MI: Not Supported 00:15:40.356 Virtualization Management: Not Supported 00:15:40.356 Doorbell Buffer Config: Not Supported 00:15:40.356 Get LBA Status Capability: Not Supported 00:15:40.356 Command & Feature Lockdown Capability: Not Supported 00:15:40.356 Abort Command Limit: 4 00:15:40.356 Async Event Request Limit: 4 00:15:40.356 Number of Firmware Slots: N/A 00:15:40.356 Firmware Slot 1 Read-Only: N/A 00:15:40.356 Firmware Activation Without Reset: N/A 00:15:40.356 Multiple Update Detection Support: N/A 00:15:40.356 Firmware Update Granularity: No Information Provided 00:15:40.356 Per-Namespace SMART Log: No 00:15:40.356 Asymmetric Namespace Access Log Page: Not Supported 00:15:40.356 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:40.356 Command Effects Log Page: Supported 00:15:40.356 Get Log Page Extended Data: Supported 00:15:40.356 Telemetry Log Pages: Not Supported 00:15:40.356 Persistent Event Log Pages: Not Supported 00:15:40.356 Supported Log Pages Log Page: May Support 00:15:40.356 Commands Supported & Effects Log Page: Not Supported 00:15:40.356 Feature Identifiers & Effects Log Page:May Support 00:15:40.356 NVMe-MI Commands & Effects Log Page: May Support 00:15:40.356 Data Area 4 for Telemetry Log: Not Supported 00:15:40.356 Error Log Page Entries Supported: 128 00:15:40.356 Keep Alive: Supported 00:15:40.356 Keep Alive Granularity: 10000 ms 00:15:40.356 00:15:40.356 NVM Command Set Attributes 00:15:40.357 ========================== 00:15:40.357 Submission Queue Entry Size 00:15:40.357 Max: 64 00:15:40.357 Min: 64 00:15:40.357 Completion Queue Entry Size 00:15:40.357 Max: 16 00:15:40.357 Min: 16 00:15:40.357 Number of Namespaces: 32 00:15:40.357 Compare Command: Supported 00:15:40.357 Write Uncorrectable Command: Not Supported 00:15:40.357 Dataset Management Command: Supported 00:15:40.357 Write Zeroes Command: Supported 00:15:40.357 Set Features Save Field: Not Supported 00:15:40.357 Reservations: Supported 00:15:40.357 Timestamp: Not Supported 00:15:40.357 Copy: Supported 00:15:40.357 Volatile Write Cache: Present 00:15:40.357 Atomic Write Unit (Normal): 1 00:15:40.357 Atomic Write Unit (PFail): 1 00:15:40.357 Atomic Compare & Write Unit: 1 00:15:40.357 Fused Compare & Write: Supported 00:15:40.357 Scatter-Gather List 00:15:40.357 SGL Command Set: Supported 00:15:40.357 SGL Keyed: Supported 00:15:40.357 SGL Bit Bucket Descriptor: Not Supported 00:15:40.357 SGL Metadata Pointer: Not Supported 00:15:40.357 Oversized SGL: Not Supported 00:15:40.357 SGL Metadata Address: Not Supported 00:15:40.357 SGL Offset: Supported 00:15:40.357 Transport SGL Data Block: Not Supported 00:15:40.357 Replay Protected Memory Block: Not Supported 00:15:40.357 00:15:40.357 Firmware Slot Information 00:15:40.357 ========================= 00:15:40.357 Active slot: 1 00:15:40.357 Slot 1 Firmware Revision: 25.01 00:15:40.357 00:15:40.357 00:15:40.357 Commands Supported and Effects 00:15:40.357 ============================== 00:15:40.357 Admin Commands 00:15:40.357 -------------- 
00:15:40.357 Get Log Page (02h): Supported 00:15:40.357 Identify (06h): Supported 00:15:40.357 Abort (08h): Supported 00:15:40.357 Set Features (09h): Supported 00:15:40.357 Get Features (0Ah): Supported 00:15:40.357 Asynchronous Event Request (0Ch): Supported 00:15:40.357 Keep Alive (18h): Supported 00:15:40.357 I/O Commands 00:15:40.357 ------------ 00:15:40.357 Flush (00h): Supported LBA-Change 00:15:40.357 Write (01h): Supported LBA-Change 00:15:40.357 Read (02h): Supported 00:15:40.357 Compare (05h): Supported 00:15:40.357 Write Zeroes (08h): Supported LBA-Change 00:15:40.357 Dataset Management (09h): Supported LBA-Change 00:15:40.357 Copy (19h): Supported LBA-Change 00:15:40.357 00:15:40.357 Error Log 00:15:40.357 ========= 00:15:40.357 00:15:40.357 Arbitration 00:15:40.357 =========== 00:15:40.357 Arbitration Burst: 1 00:15:40.357 00:15:40.357 Power Management 00:15:40.357 ================ 00:15:40.357 Number of Power States: 1 00:15:40.357 Current Power State: Power State #0 00:15:40.357 Power State #0: 00:15:40.357 Max Power: 0.00 W 00:15:40.357 Non-Operational State: Operational 00:15:40.357 Entry Latency: Not Reported 00:15:40.357 Exit Latency: Not Reported 00:15:40.357 Relative Read Throughput: 0 00:15:40.357 Relative Read Latency: 0 00:15:40.357 Relative Write Throughput: 0 00:15:40.357 Relative Write Latency: 0 00:15:40.357 Idle Power: Not Reported 00:15:40.357 Active Power: Not Reported 00:15:40.357 Non-Operational Permissive Mode: Not Supported 00:15:40.357 00:15:40.357 Health Information 00:15:40.357 ================== 00:15:40.357 Critical Warnings: 00:15:40.357 Available Spare Space: OK 00:15:40.357 Temperature: OK 00:15:40.357 Device Reliability: OK 00:15:40.357 Read Only: No 00:15:40.357 Volatile Memory Backup: OK 00:15:40.357 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:40.357 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:40.357 Available Spare: 0% 00:15:40.357 Available Spare Threshold: 0% 00:15:40.357 Life Percentage Used:[2024-11-08 07:42:58.205971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1b1c0) on tqpair=0x19b6750 00:15:40.357 [2024-11-08 07:42:58.206071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19b6750) 00:15:40.357 [2024-11-08 07:42:58.206083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.357 [2024-11-08 07:42:58.206099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1b1c0, cid 7, qid 0 00:15:40.357 [2024-11-08 07:42:58.206142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.357 [2024-11-08 07:42:58.206148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.357 [2024-11-08 07:42:58.206152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1b1c0) on tqpair=0x19b6750 00:15:40.357 [2024-11-08 07:42:58.206187] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:15:40.357 [2024-11-08 07:42:58.206197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a740) on tqpair=0x19b6750 00:15:40.357 [2024-11-08 07:42:58.206203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.357 [2024-11-08 07:42:58.206209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1a8c0) on tqpair=0x19b6750 00:15:40.357 [2024-11-08 07:42:58.206213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.357 [2024-11-08 07:42:58.206219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1aa40) on tqpair=0x19b6750 00:15:40.357 [2024-11-08 07:42:58.206223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.357 [2024-11-08 07:42:58.206228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.357 [2024-11-08 07:42:58.206233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.357 [2024-11-08 07:42:58.206241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.357 [2024-11-08 07:42:58.206255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.357 [2024-11-08 07:42:58.206271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.357 [2024-11-08 07:42:58.206304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.357 [2024-11-08 07:42:58.206310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.357 [2024-11-08 07:42:58.206313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.357 [2024-11-08 07:42:58.206324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.357 [2024-11-08 07:42:58.206338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.357 [2024-11-08 07:42:58.206353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.357 [2024-11-08 07:42:58.206405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.357 [2024-11-08 07:42:58.206410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.357 [2024-11-08 07:42:58.206414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.357 [2024-11-08 07:42:58.206423] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:15:40.357 [2024-11-08 07:42:58.206429] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:15:40.357 [2024-11-08 
07:42:58.206437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.357 [2024-11-08 07:42:58.206445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.358 [2024-11-08 07:42:58.206452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.358 [2024-11-08 07:42:58.206464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.358 [2024-11-08 07:42:58.206499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.358 [2024-11-08 07:42:58.206505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.358 [2024-11-08 07:42:58.206509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.358 [2024-11-08 07:42:58.206521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.358 [2024-11-08 07:42:58.206536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.358 [2024-11-08 07:42:58.206549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.358 [2024-11-08 07:42:58.206592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.358 [2024-11-08 07:42:58.206598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.358 [2024-11-08 07:42:58.206601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.358 [2024-11-08 07:42:58.206614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.358 [2024-11-08 07:42:58.206628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.358 [2024-11-08 07:42:58.206640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.358 [2024-11-08 07:42:58.206692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.358 [2024-11-08 07:42:58.206698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.358 [2024-11-08 07:42:58.206702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.358 [2024-11-08 07:42:58.206714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.358 [2024-11-08 
07:42:58.206722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.358 [2024-11-08 07:42:58.206729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.358 [2024-11-08 07:42:58.206741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.358 [2024-11-08 07:42:58.206782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.358 [2024-11-08 07:42:58.206788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.358 [2024-11-08 07:42:58.206792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.358 [2024-11-08 07:42:58.206804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.358 [2024-11-08 07:42:58.206818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.358 [2024-11-08 07:42:58.206831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.358 [2024-11-08 07:42:58.206867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.358 [2024-11-08 07:42:58.206873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.358 [2024-11-08 07:42:58.206876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.358 [2024-11-08 07:42:58.206889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.358 [2024-11-08 07:42:58.206903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.358 [2024-11-08 07:42:58.206916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.358 [2024-11-08 07:42:58.206953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.358 [2024-11-08 07:42:58.206959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.358 [2024-11-08 07:42:58.206963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.206967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.358 [2024-11-08 07:42:58.206975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.211000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.211006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19b6750) 00:15:40.358 [2024-11-08 07:42:58.211014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.358 [2024-11-08 07:42:58.211033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a1abc0, cid 3, qid 0 00:15:40.358 [2024-11-08 07:42:58.211071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.358 [2024-11-08 07:42:58.211077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.358 [2024-11-08 07:42:58.211081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.358 [2024-11-08 07:42:58.211085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a1abc0) on tqpair=0x19b6750 00:15:40.358 [2024-11-08 07:42:58.211093] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:15:40.358 0% 00:15:40.358 Data Units Read: 0 00:15:40.358 Data Units Written: 0 00:15:40.358 Host Read Commands: 0 00:15:40.358 Host Write Commands: 0 00:15:40.358 Controller Busy Time: 0 minutes 00:15:40.358 Power Cycles: 0 00:15:40.358 Power On Hours: 0 hours 00:15:40.358 Unsafe Shutdowns: 0 00:15:40.358 Unrecoverable Media Errors: 0 00:15:40.358 Lifetime Error Log Entries: 0 00:15:40.358 Warning Temperature Time: 0 minutes 00:15:40.358 Critical Temperature Time: 0 minutes 00:15:40.358 00:15:40.358 Number of Queues 00:15:40.358 ================ 00:15:40.358 Number of I/O Submission Queues: 127 00:15:40.358 Number of I/O Completion Queues: 127 00:15:40.358 00:15:40.358 Active Namespaces 00:15:40.358 ================= 00:15:40.358 Namespace ID:1 00:15:40.358 Error Recovery Timeout: Unlimited 00:15:40.358 Command Set Identifier: NVM (00h) 00:15:40.358 Deallocate: Supported 00:15:40.358 Deallocated/Unwritten Error: Not Supported 00:15:40.358 Deallocated Read Value: Unknown 00:15:40.358 Deallocate in Write Zeroes: Not Supported 00:15:40.358 Deallocated Guard Field: 0xFFFF 00:15:40.358 Flush: Supported 00:15:40.358 Reservation: Supported 00:15:40.358 Namespace Sharing Capabilities: Multiple Controllers 00:15:40.358 Size (in LBAs): 131072 (0GiB) 00:15:40.358 Capacity (in LBAs): 131072 (0GiB) 00:15:40.358 Utilization (in LBAs): 131072 (0GiB) 00:15:40.358 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:40.358 EUI64: ABCDEF0123456789 00:15:40.358 UUID: 2432149b-9a20-4ea9-aa84-d8fbf79715a5 00:15:40.358 Thin Provisioning: Not Supported 00:15:40.358 Per-NS Atomic Units: Yes 00:15:40.358 Atomic Boundary Size (Normal): 0 00:15:40.358 Atomic Boundary Size (PFail): 0 00:15:40.358 Atomic Boundary Offset: 0 00:15:40.358 Maximum Single Source Range Length: 65535 00:15:40.358 Maximum Copy Length: 65535 00:15:40.358 Maximum Source Range Count: 1 00:15:40.358 NGUID/EUI64 Never Reused: No 00:15:40.358 Namespace Write Protected: No 00:15:40.358 Number of LBA Formats: 1 00:15:40.358 Current LBA Format: LBA Format #00 00:15:40.358 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:40.358 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.358 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.617 rmmod nvme_tcp 00:15:40.617 rmmod nvme_fabrics 00:15:40.617 rmmod nvme_keyring 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73858 ']' 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73858 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 73858 ']' 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 73858 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73858 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:40.617 killing process with pid 73858 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73858' 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 73858 00:15:40.617 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 73858 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:40.876 07:42:58 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.876 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:41.134 00:15:41.134 real 0m2.891s 00:15:41.134 user 0m6.976s 00:15:41.134 sys 0m0.872s 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:41.134 ************************************ 00:15:41.134 END TEST nvmf_identify 00:15:41.134 ************************************ 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.134 ************************************ 00:15:41.134 START TEST nvmf_perf 00:15:41.134 ************************************ 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:41.134 * Looking for test storage... 
00:15:41.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:41.134 07:42:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.134 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:41.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.393 --rc genhtml_branch_coverage=1 00:15:41.393 --rc genhtml_function_coverage=1 00:15:41.393 --rc genhtml_legend=1 00:15:41.393 --rc geninfo_all_blocks=1 00:15:41.393 --rc geninfo_unexecuted_blocks=1 00:15:41.393 00:15:41.393 ' 00:15:41.393 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:41.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.393 --rc genhtml_branch_coverage=1 00:15:41.393 --rc genhtml_function_coverage=1 00:15:41.393 --rc genhtml_legend=1 00:15:41.393 --rc geninfo_all_blocks=1 00:15:41.393 --rc geninfo_unexecuted_blocks=1 00:15:41.394 00:15:41.394 ' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:41.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.394 --rc genhtml_branch_coverage=1 00:15:41.394 --rc genhtml_function_coverage=1 00:15:41.394 --rc genhtml_legend=1 00:15:41.394 --rc geninfo_all_blocks=1 00:15:41.394 --rc geninfo_unexecuted_blocks=1 00:15:41.394 00:15:41.394 ' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:41.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.394 --rc genhtml_branch_coverage=1 00:15:41.394 --rc genhtml_function_coverage=1 00:15:41.394 --rc genhtml_legend=1 00:15:41.394 --rc geninfo_all_blocks=1 00:15:41.394 --rc geninfo_unexecuted_blocks=1 00:15:41.394 00:15:41.394 ' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.394 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:41.394 Cannot find device "nvmf_init_br" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:41.394 Cannot find device "nvmf_init_br2" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:41.394 Cannot find device "nvmf_tgt_br" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.394 Cannot find device "nvmf_tgt_br2" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:41.394 Cannot find device "nvmf_init_br" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:41.394 Cannot find device "nvmf_init_br2" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:41.394 Cannot find device "nvmf_tgt_br" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:41.394 Cannot find device "nvmf_tgt_br2" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:41.394 Cannot find device "nvmf_br" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:41.394 Cannot find device "nvmf_init_if" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:41.394 Cannot find device "nvmf_init_if2" 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.394 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.653 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:41.654 07:42:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:41.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:41.654 00:15:41.654 --- 10.0.0.3 ping statistics --- 00:15:41.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.654 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:41.654 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:41.654 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:15:41.654 00:15:41.654 --- 10.0.0.4 ping statistics --- 00:15:41.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.654 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:15:41.654 00:15:41.654 --- 10.0.0.1 ping statistics --- 00:15:41.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.654 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:41.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:41.654 00:15:41.654 --- 10.0.0.2 ping statistics --- 00:15:41.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.654 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74123 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74123 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 74123 ']' 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:41.654 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:41.912 [2024-11-08 07:42:59.628892] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:41.913 [2024-11-08 07:42:59.629001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.913 [2024-11-08 07:42:59.779284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:41.913 [2024-11-08 07:42:59.822403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.913 [2024-11-08 07:42:59.822469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.913 [2024-11-08 07:42:59.822479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.913 [2024-11-08 07:42:59.822487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.913 [2024-11-08 07:42:59.822494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.913 [2024-11-08 07:42:59.823428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.913 [2024-11-08 07:42:59.823527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.913 [2024-11-08 07:42:59.823581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.913 [2024-11-08 07:42:59.823583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:41.913 [2024-11-08 07:42:59.864646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.183 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:42.183 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:15:42.183 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:42.183 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:42.183 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.183 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.183 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:42.183 07:42:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:42.467 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:42.467 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:43.034 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:43.034 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:43.034 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:43.034 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:43.034 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:43.034 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:43.034 07:43:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:43.293 [2024-11-08 07:43:01.107453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.293 07:43:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:43.551 07:43:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:43.551 07:43:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.810 07:43:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:43.810 07:43:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:43.810 07:43:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:44.068 [2024-11-08 07:43:02.008631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:44.068 07:43:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:44.327 07:43:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:44.327 07:43:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:44.327 07:43:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:44.327 07:43:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:45.707 Initializing NVMe Controllers 00:15:45.707 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:45.707 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:45.707 Initialization complete. Launching workers. 00:15:45.707 ======================================================== 00:15:45.707 Latency(us) 00:15:45.707 Device Information : IOPS MiB/s Average min max 00:15:45.707 PCIE (0000:00:10.0) NSID 1 from core 0: 25888.00 101.12 1235.80 342.01 7498.29 00:15:45.707 ======================================================== 00:15:45.707 Total : 25888.00 101.12 1235.80 342.01 7498.29 00:15:45.707 00:15:45.707 07:43:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:47.086 Initializing NVMe Controllers 00:15:47.086 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.086 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:47.086 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:47.086 Initialization complete. Launching workers. 
00:15:47.086 ======================================================== 00:15:47.086 Latency(us) 00:15:47.086 Device Information : IOPS MiB/s Average min max 00:15:47.086 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4958.24 19.37 200.65 75.24 4171.03 00:15:47.086 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.51 0.48 8096.47 7964.53 12000.45 00:15:47.086 ======================================================== 00:15:47.086 Total : 5081.75 19.85 392.55 75.24 12000.45 00:15:47.086 00:15:47.086 07:43:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:48.462 Initializing NVMe Controllers 00:15:48.462 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.462 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:48.462 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:48.462 Initialization complete. Launching workers. 00:15:48.462 ======================================================== 00:15:48.462 Latency(us) 00:15:48.462 Device Information : IOPS MiB/s Average min max 00:15:48.462 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10953.20 42.79 2922.03 420.22 6411.85 00:15:48.462 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4004.89 15.64 8030.27 6996.75 12526.52 00:15:48.462 ======================================================== 00:15:48.462 Total : 14958.08 58.43 4289.71 420.22 12526.52 00:15:48.462 00:15:48.462 07:43:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:48.462 07:43:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.996 Initializing NVMe Controllers 00:15:50.996 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.996 Controller IO queue size 128, less than required. 00:15:50.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:50.996 Controller IO queue size 128, less than required. 00:15:50.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:50.996 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:50.996 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:50.996 Initialization complete. Launching workers. 
00:15:50.996 ======================================================== 00:15:50.996 Latency(us) 00:15:50.996 Device Information : IOPS MiB/s Average min max 00:15:50.996 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2197.54 549.38 58714.23 35135.28 106286.15 00:15:50.996 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 685.23 171.31 197089.63 46056.59 323591.93 00:15:50.996 ======================================================== 00:15:50.996 Total : 2882.77 720.69 91605.96 35135.28 323591.93 00:15:50.996 00:15:50.996 07:43:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:51.255 Initializing NVMe Controllers 00:15:51.255 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:51.255 Controller IO queue size 128, less than required. 00:15:51.255 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.255 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:51.255 Controller IO queue size 128, less than required. 00:15:51.255 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.255 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:51.255 WARNING: Some requested NVMe devices were skipped 00:15:51.255 No valid NVMe controllers or AIO or URING devices found 00:15:51.255 07:43:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:53.795 Initializing NVMe Controllers 00:15:53.795 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:53.795 Controller IO queue size 128, less than required. 00:15:53.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:53.795 Controller IO queue size 128, less than required. 00:15:53.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:53.795 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:53.795 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:53.795 Initialization complete. Launching workers. 
00:15:53.795 00:15:53.795 ==================== 00:15:53.795 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:53.795 TCP transport: 00:15:53.795 polls: 13018 00:15:53.795 idle_polls: 8332 00:15:53.795 sock_completions: 4686 00:15:53.795 nvme_completions: 8111 00:15:53.795 submitted_requests: 12224 00:15:53.795 queued_requests: 1 00:15:53.795 00:15:53.795 ==================== 00:15:53.795 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:53.795 TCP transport: 00:15:53.795 polls: 13161 00:15:53.795 idle_polls: 7881 00:15:53.795 sock_completions: 5280 00:15:53.795 nvme_completions: 7971 00:15:53.795 submitted_requests: 11948 00:15:53.795 queued_requests: 1 00:15:53.795 ======================================================== 00:15:53.795 Latency(us) 00:15:53.795 Device Information : IOPS MiB/s Average min max 00:15:53.795 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2025.40 506.35 63960.32 34666.44 98446.47 00:15:53.795 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1990.44 497.61 64570.78 23355.27 102341.50 00:15:53.795 ======================================================== 00:15:53.795 Total : 4015.85 1003.96 64262.89 23355.27 102341.50 00:15:53.795 00:15:53.795 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:53.795 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.054 rmmod nvme_tcp 00:15:54.054 rmmod nvme_fabrics 00:15:54.054 rmmod nvme_keyring 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74123 ']' 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74123 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 74123 ']' 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 74123 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74123 00:15:54.054 killing process with pid 74123 00:15:54.054 07:43:11 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74123' 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 74123 00:15:54.054 07:43:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 74123 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:54.991 00:15:54.991 real 0m14.037s 00:15:54.991 user 0m50.086s 00:15:54.991 sys 0m4.458s 00:15:54.991 07:43:12 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:54.991 07:43:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:54.991 ************************************ 00:15:54.991 END TEST nvmf_perf 00:15:54.991 ************************************ 00:15:55.250 07:43:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:55.250 07:43:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:55.250 07:43:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:55.250 07:43:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.250 ************************************ 00:15:55.250 START TEST nvmf_fio_host 00:15:55.250 ************************************ 00:15:55.250 07:43:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:55.250 * Looking for test storage... 00:15:55.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.250 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:55.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.251 --rc genhtml_branch_coverage=1 00:15:55.251 --rc genhtml_function_coverage=1 00:15:55.251 --rc genhtml_legend=1 00:15:55.251 --rc geninfo_all_blocks=1 00:15:55.251 --rc geninfo_unexecuted_blocks=1 00:15:55.251 00:15:55.251 ' 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:55.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.251 --rc genhtml_branch_coverage=1 00:15:55.251 --rc genhtml_function_coverage=1 00:15:55.251 --rc genhtml_legend=1 00:15:55.251 --rc geninfo_all_blocks=1 00:15:55.251 --rc geninfo_unexecuted_blocks=1 00:15:55.251 00:15:55.251 ' 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:55.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.251 --rc genhtml_branch_coverage=1 00:15:55.251 --rc genhtml_function_coverage=1 00:15:55.251 --rc genhtml_legend=1 00:15:55.251 --rc geninfo_all_blocks=1 00:15:55.251 --rc geninfo_unexecuted_blocks=1 00:15:55.251 00:15:55.251 ' 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:55.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.251 --rc genhtml_branch_coverage=1 00:15:55.251 --rc genhtml_function_coverage=1 00:15:55.251 --rc genhtml_legend=1 00:15:55.251 --rc geninfo_all_blocks=1 00:15:55.251 --rc geninfo_unexecuted_blocks=1 00:15:55.251 00:15:55.251 ' 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.251 07:43:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.251 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.511 07:43:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.511 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
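For readers following the nvmftestinit trace below: the nvmf_veth_init sequence it runs builds a small veth/bridge topology with the target isolated in its own network namespace. The condensed sketch that follows is an illustrative reconstruction of that sequence, not the harness script itself; all interface, namespace, and address names are taken directly from the trace.
# Condensed sketch of the topology nvmf_veth_init builds (names as traced)
ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joining both sides
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow forwarding across the bridge
ping -c 1 10.0.0.3                                                   # sanity check: initiator -> target
The ping statistics later in the trace confirm each of the four addresses is reachable before the target is started.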
00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:55.511 Cannot find device "nvmf_init_br" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:55.511 Cannot find device "nvmf_init_br2" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:55.511 Cannot find device "nvmf_tgt_br" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:55.511 Cannot find device "nvmf_tgt_br2" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:55.511 Cannot find device "nvmf_init_br" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:55.511 Cannot find device "nvmf_init_br2" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:55.511 Cannot find device "nvmf_tgt_br" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:55.511 Cannot find device "nvmf_tgt_br2" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:55.511 Cannot find device "nvmf_br" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:55.511 Cannot find device "nvmf_init_if" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:55.511 Cannot find device "nvmf_init_if2" 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:55.511 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.512 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:55.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:55.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:55.771 00:15:55.771 --- 10.0.0.3 ping statistics --- 00:15:55.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.771 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:55.771 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:55.771 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:15:55.771 00:15:55.771 --- 10.0.0.4 ping statistics --- 00:15:55.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.771 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:55.771 00:15:55.771 --- 10.0.0.1 ping statistics --- 00:15:55.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.771 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:55.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:55.771 00:15:55.771 --- 10.0.0.2 ping statistics --- 00:15:55.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.771 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:55.771 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74576 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74576 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 74576 ']' 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:55.772 07:43:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:55.772 [2024-11-08 07:43:13.728631] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:15:55.772 [2024-11-08 07:43:13.728952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.030 [2024-11-08 07:43:13.880479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.031 [2024-11-08 07:43:13.931390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.031 [2024-11-08 07:43:13.931431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.031 [2024-11-08 07:43:13.931441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.031 [2024-11-08 07:43:13.931450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.031 [2024-11-08 07:43:13.931457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
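For reference, the target configuration that the fio host test traces over the next lines reduces to the RPC sequence sketched below. Paths, NQNs, and arguments are copied from the trace; the sketch is a condensed illustration, not the host/fio.sh script itself (the harness backgrounds nvmf_tgt and waits for its RPC socket via waitforlisten rather than using a bare "&").
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Start the target inside the namespace: shm id 0, all tracepoint groups, 4-core mask
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
$RPC nvmf_create_transport -t tcp -o -u 8192                        # TCP transport (flags as traced)
$RPC bdev_malloc_create 64 512 -b Malloc1                           # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1       # expose Malloc1 as namespace 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
# fio then drives the subsystem through the SPDK nvme ioengine plugin:
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096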
00:15:56.031 [2024-11-08 07:43:13.932340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.031 [2024-11-08 07:43:13.936019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.031 [2024-11-08 07:43:13.936131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.031 [2024-11-08 07:43:13.936134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.031 [2024-11-08 07:43:13.978102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:56.289 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:56.289 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:15:56.290 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:56.548 [2024-11-08 07:43:14.311457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.548 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:56.548 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:56.548 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.549 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:56.807 Malloc1 00:15:56.807 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:57.091 07:43:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.360 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:57.360 [2024-11-08 07:43:15.242158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:57.360 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:57.629 07:43:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:57.889 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:57.889 fio-3.35 00:15:57.889 Starting 1 thread 00:16:00.424 00:16:00.424 test: (groupid=0, jobs=1): err= 0: pid=74647: Fri Nov 8 07:43:18 2024 00:16:00.424 read: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(87.7MiB/2005msec) 00:16:00.424 slat (nsec): min=1587, max=348573, avg=1992.15, stdev=3366.31 00:16:00.424 clat (usec): min=2821, max=10540, avg=5963.51, stdev=455.42 00:16:00.424 lat (usec): min=2866, max=10542, avg=5965.50, stdev=455.41 00:16:00.424 clat percentiles (usec): 00:16:00.424 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:16:00.424 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:16:00.424 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:16:00.425 | 99.00th=[ 7046], 99.50th=[ 7635], 99.90th=[ 9503], 99.95th=[ 9765], 00:16:00.425 | 99.99th=[10421] 00:16:00.425 bw ( KiB/s): min=43688, max=45632, per=99.85%, avg=44746.00, stdev=812.63, samples=4 00:16:00.425 iops : min=10924, max=11408, avg=11187.00, stdev=202.29, samples=4 00:16:00.425 write: IOPS=11.2k, BW=43.6MiB/s (45.7MB/s)(87.4MiB/2005msec); 0 zone resets 00:16:00.425 slat (nsec): min=1631, max=321802, avg=2031.77, stdev=2363.63 00:16:00.425 clat (usec): min=2683, max=9929, avg=5437.91, stdev=422.19 00:16:00.425 lat (usec): min=2697, max=9930, avg=5439.94, stdev=422.33 00:16:00.425 
clat percentiles (usec): 00:16:00.425 | 1.00th=[ 4555], 5.00th=[ 4883], 10.00th=[ 5014], 20.00th=[ 5145], 00:16:00.425 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:16:00.425 | 70.00th=[ 5604], 80.00th=[ 5735], 90.00th=[ 5866], 95.00th=[ 5997], 00:16:00.425 | 99.00th=[ 6521], 99.50th=[ 7177], 99.90th=[ 9110], 99.95th=[ 9241], 00:16:00.425 | 99.99th=[ 9896] 00:16:00.425 bw ( KiB/s): min=43952, max=45400, per=99.98%, avg=44604.00, stdev=669.79, samples=4 00:16:00.425 iops : min=10988, max=11350, avg=11151.00, stdev=167.45, samples=4 00:16:00.425 lat (msec) : 4=0.06%, 10=99.92%, 20=0.02% 00:16:00.425 cpu : usr=69.31%, sys=23.80%, ctx=9, majf=0, minf=7 00:16:00.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:00.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.425 issued rwts: total=22463,22362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.425 00:16:00.425 Run status group 0 (all jobs): 00:16:00.425 READ: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=87.7MiB (92.0MB), run=2005-2005msec 00:16:00.425 WRITE: bw=43.6MiB/s (45.7MB/s), 43.6MiB/s-43.6MiB/s (45.7MB/s-45.7MB/s), io=87.4MiB (91.6MB), run=2005-2005msec 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:00.425 07:43:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:00.425 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:00.425 fio-3.35 00:16:00.425 Starting 1 thread 00:16:02.955 00:16:02.955 test: (groupid=0, jobs=1): err= 0: pid=74694: Fri Nov 8 07:43:20 2024 00:16:02.955 read: IOPS=8357, BW=131MiB/s (137MB/s)(262MiB/2010msec) 00:16:02.955 slat (nsec): min=2530, max=98116, avg=2980.95, stdev=1455.93 00:16:02.955 clat (usec): min=1857, max=23399, avg=8789.51, stdev=3137.14 00:16:02.955 lat (usec): min=1859, max=23404, avg=8792.49, stdev=3137.32 00:16:02.955 clat percentiles (usec): 00:16:02.955 | 1.00th=[ 3556], 5.00th=[ 4424], 10.00th=[ 5145], 20.00th=[ 6259], 00:16:02.955 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 8979], 00:16:02.955 | 70.00th=[ 9765], 80.00th=[11076], 90.00th=[12780], 95.00th=[14353], 00:16:02.955 | 99.00th=[19530], 99.50th=[20579], 99.90th=[22414], 99.95th=[22938], 00:16:02.955 | 99.99th=[23200] 00:16:02.955 bw ( KiB/s): min=57664, max=75808, per=51.81%, avg=69288.00, stdev=7973.43, samples=4 00:16:02.955 iops : min= 3604, max= 4738, avg=4330.50, stdev=498.34, samples=4 00:16:02.955 write: IOPS=4781, BW=74.7MiB/s (78.3MB/s)(141MiB/1889msec); 0 zone resets 00:16:02.955 slat (usec): min=29, max=220, avg=32.79, stdev= 7.37 00:16:02.955 clat (usec): min=5615, max=29839, avg=11466.74, stdev=3300.70 00:16:02.955 lat (usec): min=5645, max=29887, avg=11499.53, stdev=3303.85 00:16:02.955 clat percentiles (usec): 00:16:02.955 | 1.00th=[ 6390], 5.00th=[ 7308], 10.00th=[ 7963], 20.00th=[ 8848], 00:16:02.955 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10814], 60.00th=[11600], 00:16:02.955 | 70.00th=[12387], 80.00th=[13829], 90.00th=[15664], 95.00th=[17433], 00:16:02.955 | 99.00th=[22676], 99.50th=[25297], 99.90th=[28705], 99.95th=[28967], 00:16:02.955 | 99.99th=[29754] 00:16:02.955 bw ( KiB/s): min=60416, max=77856, per=94.13%, avg=72016.00, stdev=8005.48, samples=4 00:16:02.955 iops : min= 3776, max= 4866, avg=4501.00, stdev=500.34, samples=4 00:16:02.955 lat (msec) : 2=0.02%, 4=1.80%, 10=57.54%, 20=39.40%, 50=1.25% 00:16:02.955 cpu : usr=81.53%, sys=15.18%, ctx=151, majf=0, minf=4 00:16:02.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:02.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.955 issued rwts: total=16799,9033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.955 00:16:02.955 Run status group 0 (all 
jobs): 00:16:02.955 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=262MiB (275MB), run=2010-2010msec 00:16:02.955 WRITE: bw=74.7MiB/s (78.3MB/s), 74.7MiB/s-74.7MiB/s (78.3MB/s-78.3MB/s), io=141MiB (148MB), run=1889-1889msec 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:02.955 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:02.955 rmmod nvme_tcp 00:16:02.955 rmmod nvme_fabrics 00:16:03.213 rmmod nvme_keyring 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74576 ']' 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74576 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 74576 ']' 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 74576 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74576 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:03.213 killing process with pid 74576 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74576' 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 74576 00:16:03.213 07:43:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 74576 00:16:03.213 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.213 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.213 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.213 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:03.213 07:43:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:16:03.213 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.213 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:03.471 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:03.472 ************************************ 00:16:03.472 END TEST nvmf_fio_host 00:16:03.472 ************************************ 00:16:03.472 00:16:03.472 real 0m8.434s 00:16:03.472 user 0m33.008s 00:16:03.472 sys 0m2.610s 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:03.472 07:43:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.730 ************************************ 00:16:03.730 START TEST nvmf_failover 
00:16:03.730 ************************************ 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:03.730 * Looking for test storage... 00:16:03.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:03.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.730 --rc genhtml_branch_coverage=1 00:16:03.730 --rc genhtml_function_coverage=1 00:16:03.730 --rc genhtml_legend=1 00:16:03.730 --rc geninfo_all_blocks=1 00:16:03.730 --rc geninfo_unexecuted_blocks=1 00:16:03.730 00:16:03.730 ' 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:03.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.730 --rc genhtml_branch_coverage=1 00:16:03.730 --rc genhtml_function_coverage=1 00:16:03.730 --rc genhtml_legend=1 00:16:03.730 --rc geninfo_all_blocks=1 00:16:03.730 --rc geninfo_unexecuted_blocks=1 00:16:03.730 00:16:03.730 ' 00:16:03.730 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:03.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.730 --rc genhtml_branch_coverage=1 00:16:03.730 --rc genhtml_function_coverage=1 00:16:03.731 --rc genhtml_legend=1 00:16:03.731 --rc geninfo_all_blocks=1 00:16:03.731 --rc geninfo_unexecuted_blocks=1 00:16:03.731 00:16:03.731 ' 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:03.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.731 --rc genhtml_branch_coverage=1 00:16:03.731 --rc genhtml_function_coverage=1 00:16:03.731 --rc genhtml_legend=1 00:16:03.731 --rc geninfo_all_blocks=1 00:16:03.731 --rc geninfo_unexecuted_blocks=1 00:16:03.731 00:16:03.731 ' 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.731 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.990 
07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:03.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
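
The cmp_versions/decimal trace above is the generic field-wise version check from scripts/common.sh: both version strings are split into numeric fields (the script splits on '.', '-' and ':') and compared field by field, which is how the harness decides that lcov 1.15 is older than 2.x and exports the legacy --rc lcov_* options. A minimal standalone sketch of the same idea, assuming plain dotted versions; the helper name version_lt is hypothetical, not the script's own:

    # version_lt A B -> exit status 0 when version A sorts strictly before version B.
    # Simplified: split on '.', compare numerically field by field, missing fields count as 0.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not strictly less-than
    }

    version_lt 1.15 2 && echo 'lcov < 2: use the legacy LCOV_OPTS'
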
00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:03.990 Cannot find device "nvmf_init_br" 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:03.990 Cannot find device "nvmf_init_br2" 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:03.990 Cannot find device "nvmf_tgt_br" 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.990 Cannot find device "nvmf_tgt_br2" 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:03.990 Cannot find device "nvmf_init_br" 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:03.990 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:03.990 Cannot find device "nvmf_init_br2" 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:03.991 Cannot find device "nvmf_tgt_br" 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:03.991 Cannot find device "nvmf_tgt_br2" 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:03.991 Cannot find device "nvmf_br" 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:03.991 Cannot find device "nvmf_init_if" 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:03.991 Cannot find device "nvmf_init_if2" 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.991 
07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:03.991 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:04.250 07:43:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:04.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:04.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:04.250 00:16:04.250 --- 10.0.0.3 ping statistics --- 00:16:04.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.250 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:04.250 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:04.250 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:16:04.250 00:16:04.250 --- 10.0.0.4 ping statistics --- 00:16:04.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.250 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:04.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:04.250 00:16:04.250 --- 10.0.0.1 ping statistics --- 00:16:04.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.250 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:04.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:04.250 00:16:04.250 --- 10.0.0.2 ping statistics --- 00:16:04.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.250 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74971 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74971 00:16:04.250 07:43:22 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 74971 ']' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:04.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:04.250 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:04.509 [2024-11-08 07:43:22.214185] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:16:04.509 [2024-11-08 07:43:22.214276] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.509 [2024-11-08 07:43:22.363314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.509 [2024-11-08 07:43:22.406350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.509 [2024-11-08 07:43:22.406548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.509 [2024-11-08 07:43:22.406603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.509 [2024-11-08 07:43:22.406647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.509 [2024-11-08 07:43:22.406711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
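
For orientation, the nvmf_veth_init sequence traced above is ordinary iproute2 plumbing: a network namespace for the target, veth pairs whose host-side peers are enslaved to a bridge, addresses 10.0.0.1-10.0.0.4, and iptables ACCEPT rules for the NVMe/TCP port. The following hand-condensed sketch (run as root) keeps the interface and namespace names from the log but covers only the first initiator/target pair, omitting the second interfaces and the bridge FORWARD rule:

    # Target side lives in its own namespace; the initiator stays in the default one.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addresses as in the log: initiator 10.0.0.1, target 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bridge the host-side peers so the two ends can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic for port 4420 in, then sanity-check connectivity.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
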
00:16:04.509 [2024-11-08 07:43:22.407637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.509 [2024-11-08 07:43:22.407744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.509 [2024-11-08 07:43:22.407746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.509 [2024-11-08 07:43:22.448953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:04.768 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:04.768 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:16:04.768 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:04.768 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:04.768 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:04.768 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.768 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:05.026 [2024-11-08 07:43:22.825270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.026 07:43:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:05.284 Malloc0 00:16:05.284 07:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:05.542 07:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:05.800 07:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:06.057 [2024-11-08 07:43:23.816849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:06.057 07:43:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:06.057 [2024-11-08 07:43:24.004973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:06.315 [2024-11-08 07:43:24.205164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75021 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
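
The target-side configuration for this failover test is a handful of rpc.py calls against the nvmf_tgt started above, after which bdevperf is left waiting for RPCs on /var/tmp/bdevperf.sock. Gathered in one place, with $rpc standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, the sequence from the trace looks roughly like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport, flags verbatim from the trace above.
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # 64 MB malloc bdev with 512-byte blocks, exported as a namespace of cnode1.
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Three listeners on 10.0.0.3 so the initiator has somewhere to fail over to.
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done
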
00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75021 /var/tmp/bdevperf.sock 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75021 ']' 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:06.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:06.315 07:43:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:07.248 07:43:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:07.248 07:43:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:16:07.248 07:43:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:07.507 NVMe0n1 00:16:07.507 07:43:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:07.765 00:16:07.765 07:43:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75039 00:16:07.765 07:43:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:07.765 07:43:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:08.700 07:43:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:08.962 [2024-11-08 07:43:26.902928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.962 [2024-11-08 07:43:26.902987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.962 [2024-11-08 07:43:26.902997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.962 [2024-11-08 07:43:26.903006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.962 [2024-11-08 07:43:26.903015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.962 [2024-11-08 07:43:26.903023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.962 [2024-11-08 07:43:26.903032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.962 [2024-11-08 07:43:26.903040] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set [identical recv-state messages for tqpair=0x228dcf0 repeated for the remaining 07:43:26.903xxx timestamps while the 10.0.0.3:4420 listener is removed]
with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:08.963 [2024-11-08 07:43:26.903864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228dcf0 is same with the state(6) to be set 00:16:09.250 07:43:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:12.582 07:43:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:12.582 00:16:12.582 07:43:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:12.840 07:43:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:16.130 07:43:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:16.130 [2024-11-08 07:43:33.873393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:16.130 07:43:33 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:17.068 07:43:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:17.327 07:43:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75039 00:16:23.906 { 00:16:23.906 "results": [ 00:16:23.906 { 00:16:23.906 "job": "NVMe0n1", 00:16:23.906 "core_mask": "0x1", 00:16:23.906 "workload": "verify", 00:16:23.906 "status": "finished", 00:16:23.906 "verify_range": { 00:16:23.906 "start": 0, 00:16:23.906 "length": 16384 00:16:23.906 }, 00:16:23.906 "queue_depth": 128, 00:16:23.906 "io_size": 4096, 00:16:23.906 "runtime": 15.00917, 00:16:23.906 "iops": 11170.3045538161, 00:16:23.906 "mibps": 43.63400216334414, 00:16:23.906 "io_failed": 3909, 00:16:23.906 "io_timeout": 0, 00:16:23.906 "avg_latency_us": 11174.952944811466, 00:16:23.906 "min_latency_us": 446.6590476190476, 00:16:23.906 "max_latency_us": 25340.586666666666 00:16:23.906 } 00:16:23.906 ], 00:16:23.906 "core_count": 1 00:16:23.906 } 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75021 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75021 ']' 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75021 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75021 00:16:23.906 killing process with pid 75021 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75021' 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75021 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75021 00:16:23.906 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:23.906 [2024-11-08 07:43:24.261990] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:16:23.906 [2024-11-08 07:43:24.262070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75021 ] 00:16:23.906 [2024-11-08 07:43:24.402149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.906 [2024-11-08 07:43:24.444880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.906 [2024-11-08 07:43:24.485518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.906 Running I/O for 15 seconds... 
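For reference, the listener-failover exercise that host/failover.sh drives in the log above can be reduced to a minimal shell sketch. This is only an illustration assembled from the rpc.py calls recorded in this run; it assumes an SPDK target is already listening on 10.0.0.3, a bdevperf instance exposes the RPC socket /var/tmp/bdevperf.sock, and subsystem nqn.2016-06.io.spdk:cnode1 has listeners on ports 4420-4422, as in the logged environment. The sleep durations simply mirror the ones in the log.

    #!/usr/bin/env bash
    # Minimal sketch of the failover sequence seen in this run (not the full test script).
    # Assumptions: SPDK target on 10.0.0.3, bdevperf RPC socket at /var/tmp/bdevperf.sock,
    # subsystem nqn.2016-06.io.spdk:cnode1 with TCP listeners on ports 4420-4422.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach a second controller path through port 4422 and mark it as a failover path.
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

    # Remove the listener currently in use and give the host time to fail over.
    "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 3

    # Re-add an alternative listener so the host can reconnect.
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 1

    # Remove the remaining secondary listener while bdevperf I/O continues.
    "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

After these steps the test waits for the bdevperf job to finish and reports the per-job results (IOPS, throughput, failed I/O, latency) in the JSON block shown above, then kills the remaining processes and dumps try.txt, which follows below.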
00:16:23.906 9109.00 IOPS, 35.58 MiB/s [2024-11-08T07:43:41.867Z] [2024-11-08 07:43:26.903913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.903956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.903989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.906 [2024-11-08 07:43:26.904376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.906 [2024-11-08 07:43:26.904389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 
07:43:26.904545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.904951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.904965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:64 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83640 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.907 [2024-11-08 07:43:26.905434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.907 [2024-11-08 07:43:26.905447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:23.908 [2024-11-08 07:43:26.905693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.905975] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.905996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.908 [2024-11-08 07:43:26.906530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.908 [2024-11-08 07:43:26.906544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.906969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.906988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.907016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.907043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.907071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.907098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.907125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 
[2024-11-08 07:43:26.907140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.909 [2024-11-08 07:43:26.907550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.909 [2024-11-08 07:43:26.907578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.909 [2024-11-08 07:43:26.907592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb32100 is same with the state(6) to be set 00:16:23.910 [2024-11-08 07:43:26.907608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.910 [2024-11-08 07:43:26.907618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.910 [2024-11-08 07:43:26.907628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84136 len:8 PRP1 0x0 PRP2 0x0 00:16:23.910 [2024-11-08 07:43:26.907642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:26.907696] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:23.910 [2024-11-08 07:43:26.907745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.910 [2024-11-08 07:43:26.907760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:26.907774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.910 [2024-11-08 07:43:26.907787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:26.907801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.910 [2024-11-08 07:43:26.907814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:26.907827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.910 [2024-11-08 07:43:26.907840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:26.907854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:23.910 [2024-11-08 07:43:26.907899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa95710 (9): Bad file descriptor 00:16:23.910 [2024-11-08 07:43:26.910697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:23.910 [2024-11-08 07:43:26.932385] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:16:23.910 9959.00 IOPS, 38.90 MiB/s [2024-11-08T07:43:41.871Z] 10530.00 IOPS, 41.13 MiB/s [2024-11-08T07:43:41.871Z] 10829.25 IOPS, 42.30 MiB/s [2024-11-08T07:43:41.871Z] [2024-11-08 07:43:30.547952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.910 [2024-11-08 07:43:30.548041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.910 [2024-11-08 07:43:30.548104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.910 [2024-11-08 07:43:30.548132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.910 [2024-11-08 07:43:30.548160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.910 [2024-11-08 07:43:30.548188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.910 [2024-11-08 07:43:30.548224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.910 [2024-11-08 07:43:30.548252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.910 [2024-11-08 07:43:30.548280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548506] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.910 [2024-11-08 07:43:30.548817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.910 [2024-11-08 07:43:30.548831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.548845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.548859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.548872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.548886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.548900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.548914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.548928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.548942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.548955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.548969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.548991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:23.911 [2024-11-08 07:43:30.549090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549371] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.911 [2024-11-08 07:43:30.549639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.911 [2024-11-08 07:43:30.549763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.911 [2024-11-08 07:43:30.549776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.549791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.549804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.549818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.549832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.549846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.549859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.549878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.549892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.549907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.549920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.549935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:122 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.549947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.549962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.549975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.549997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37120 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:23.912 [2024-11-08 07:43:30.550234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.912 [2024-11-08 07:43:30.550262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.912 [2024-11-08 07:43:30.550290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.912 [2024-11-08 07:43:30.550318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.912 [2024-11-08 07:43:30.550347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.912 [2024-11-08 07:43:30.550375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.912 [2024-11-08 07:43:30.550402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.912 [2024-11-08 07:43:30.550430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 
[2024-11-08 07:43:30.550513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.912 [2024-11-08 07:43:30.550890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.912 [2024-11-08 07:43:30.550905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.913 [2024-11-08 07:43:30.550918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.550932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.913 [2024-11-08 07:43:30.550945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.550965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.550986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.913 [2024-11-08 07:43:30.551381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fb70 is same with the state(6) to be set 00:16:23.913 [2024-11-08 07:43:30.551412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37304 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37744 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37752 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37760 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37768 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37776 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 
07:43:30.551672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37784 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37792 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37800 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37808 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:23.913 [2024-11-08 07:43:30.551880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:23.913 [2024-11-08 07:43:30.551889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37816 len:8 PRP1 0x0 PRP2 0x0 00:16:23.913 [2024-11-08 07:43:30.551902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.551953] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:16:23.913 [2024-11-08 07:43:30.552011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.913 [2024-11-08 07:43:30.552027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.552041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.913 [2024-11-08 07:43:30.552054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.913 [2024-11-08 07:43:30.552067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.914 [2024-11-08 07:43:30.552081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:30.552094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.914 [2024-11-08 07:43:30.552114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:30.552127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:16:23.914 [2024-11-08 07:43:30.552156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa95710 (9): Bad file descriptor 00:16:23.914 [2024-11-08 07:43:30.554929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:16:23.914 [2024-11-08 07:43:30.584599] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:16:23.914 10890.40 IOPS, 42.54 MiB/s [2024-11-08T07:43:41.875Z] 11018.67 IOPS, 43.04 MiB/s [2024-11-08T07:43:41.875Z] 11113.14 IOPS, 43.41 MiB/s [2024-11-08T07:43:41.875Z] 11195.00 IOPS, 43.73 MiB/s [2024-11-08T07:43:41.875Z] 11253.33 IOPS, 43.96 MiB/s [2024-11-08T07:43:41.875Z] [2024-11-08 07:43:35.148768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.914 [2024-11-08 07:43:35.148829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.148852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.914 [2024-11-08 07:43:35.148867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.148882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:23.914 [2024-11-08 07:43:35.148896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.148911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.148924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.148939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 
[2024-11-08 07:43:35.148952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.148967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.148991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.914 [2024-11-08 07:43:35.149246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.914 [2024-11-08 07:43:35.149259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:23.914 [... repeated notices (07:43:35.149 - 07:43:35.151) omitted: nvme_qpair.c: 243:nvme_io_qpair_print_command printed every outstanding READ/WRITE on qid:1 and nvme_qpair.c: 474:spdk_nvme_print_completion reported each of them as ABORTED - SQ DELETION (00/08) ...]
00:16:23.916 [2024-11-08 07:43:35.151743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6f830 is same with the state(6) to be set
00:16:23.916 [... repeated notices (07:43:35.151 - 07:43:35.168) omitted: nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs aborted the still-queued READ/WRITE requests and nvme_qpair.c: 558:nvme_qpair_manual_complete_request completed each one manually, again as ABORTED - SQ DELETION (00/08) ...]
00:16:23.917 [2024-11-08 07:43:35.168722] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:16:23.918 [... omitted: four admin ASYNC EVENT REQUESTs (0c) on qid:0 (cid:0-3) printed and completed with the same ABORTED - SQ DELETION (00/08) status ...]
00:16:23.918 [2024-11-08 07:43:35.168908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:16:23.918 [2024-11-08 07:43:35.168957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa95710 (9): Bad file descriptor
00:16:23.918 [2024-11-08 07:43:35.172253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:16:23.918 [2024-11-08 07:43:35.194027] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:16:23.918 11231.30 IOPS, 43.87 MiB/s [2024-11-08T07:43:41.879Z] 11256.64 IOPS, 43.97 MiB/s [2024-11-08T07:43:41.879Z] 11287.75 IOPS, 44.09 MiB/s [2024-11-08T07:43:41.879Z] 11317.62 IOPS, 44.21 MiB/s [2024-11-08T07:43:41.879Z] 11238.64 IOPS, 43.90 MiB/s [2024-11-08T07:43:41.879Z] 11170.20 IOPS, 43.63 MiB/s
00:16:23.918 Latency(us)
00:16:23.918 [2024-11-08T07:43:41.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:23.918 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:23.918 Verification LBA range: start 0x0 length 0x4000
00:16:23.918 NVMe0n1 : 15.01 11170.30 43.63 260.44 0.00 11174.95 446.66 25340.59
00:16:23.918 [2024-11-08T07:43:41.879Z] ===================================================================================================================
00:16:23.918 [2024-11-08T07:43:41.879Z] Total : 11170.30 43.63 260.44 0.00 11174.95 446.66 25340.59
00:16:23.918 Received shutdown signal, test time was about 15.000000 seconds
00:16:23.918
00:16:23.918 Latency(us)
00:16:23.918 [2024-11-08T07:43:41.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:23.918 [2024-11-08T07:43:41.879Z] ===================================================================================================================
00:16:23.918 [2024-11-08T07:43:41.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:23.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75213
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75213 /var/tmp/bdevperf.sock
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75213 ']'
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
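The grep above validates the previous run (three successful controller resets, one per forced failover), and the xtrace that follows sets up the next bdevperf run: extra listeners are added, the controller is attached once per path with -x failover, and the active path is detached to force a failover. A condensed sketch of that flow, using the addresses, ports and NQN visible in this log; helper functions, error handling and the bdevperf/waitforlisten plumbing are omitted, and BDEVPERF_LOG / EXPECTED are placeholder names, not variables from the test harness:

```bash
#!/usr/bin/env bash
# Sketch of the failover sequence exercised by host/failover.sh in this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Expose the subsystem on two additional ports so alternate paths exist.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4422

# Attach the controller once per path; -x failover keeps the extra trids
# as standby (failover) paths rather than active-active multipath channels.
for port in 4420 4421 4422; do
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 \
       -s "$port" -f ipv4 -n $NQN -x failover
done

# Drop the currently active path; bdev_nvme should fail over to the next trid.
$RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $NQN
sleep 3

# Pass/fail: one "Resetting controller successful" line per forced failover.
count=$(grep -c 'Resetting controller successful' "$BDEVPERF_LOG")
(( count == EXPECTED )) || exit 1
```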
00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:23.918 07:43:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:23.918 07:43:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:23.918 07:43:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:16:23.918 07:43:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:23.918 [2024-11-08 07:43:41.581271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:23.918 07:43:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:23.918 [2024-11-08 07:43:41.777419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:23.918 07:43:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:24.177 NVMe0n1 00:16:24.177 07:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:24.744 00:16:24.744 07:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:25.002 00:16:25.002 07:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:25.002 07:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:25.261 07:43:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:25.261 07:43:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:28.549 07:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:28.549 07:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:28.549 07:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:28.549 07:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75288 00:16:28.550 07:43:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75288 00:16:29.926 { 00:16:29.926 "results": [ 00:16:29.926 { 00:16:29.926 "job": "NVMe0n1", 00:16:29.926 "core_mask": "0x1", 
00:16:29.926 "workload": "verify", 00:16:29.926 "status": "finished", 00:16:29.926 "verify_range": { 00:16:29.926 "start": 0, 00:16:29.926 "length": 16384 00:16:29.926 }, 00:16:29.926 "queue_depth": 128, 00:16:29.926 "io_size": 4096, 00:16:29.926 "runtime": 1.006717, 00:16:29.926 "iops": 10293.856168118746, 00:16:29.926 "mibps": 40.21037565671385, 00:16:29.926 "io_failed": 0, 00:16:29.926 "io_timeout": 0, 00:16:29.926 "avg_latency_us": 12370.006955514813, 00:16:29.926 "min_latency_us": 928.4266666666666, 00:16:29.926 "max_latency_us": 14043.42857142857 00:16:29.926 } 00:16:29.926 ], 00:16:29.926 "core_count": 1 00:16:29.926 } 00:16:29.926 07:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:29.926 [2024-11-08 07:43:41.044108] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:16:29.926 [2024-11-08 07:43:41.045037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75213 ] 00:16:29.926 [2024-11-08 07:43:41.194300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.926 [2024-11-08 07:43:41.239883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.926 [2024-11-08 07:43:41.280627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.926 [2024-11-08 07:43:43.160152] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:29.926 [2024-11-08 07:43:43.160252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.926 [2024-11-08 07:43:43.160272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.926 [2024-11-08 07:43:43.160288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.926 [2024-11-08 07:43:43.160301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.927 [2024-11-08 07:43:43.160315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.927 [2024-11-08 07:43:43.160328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.927 [2024-11-08 07:43:43.160341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.927 [2024-11-08 07:43:43.160354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.927 [2024-11-08 07:43:43.160368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:16:29.927 [2024-11-08 07:43:43.160410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:16:29.927 [2024-11-08 07:43:43.160432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x674710 (9): Bad file descriptor 00:16:29.927 [2024-11-08 07:43:43.167710] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:16:29.927 Running I/O for 1 seconds... 00:16:29.927 10235.00 IOPS, 39.98 MiB/s 00:16:29.927 Latency(us) 00:16:29.927 [2024-11-08T07:43:47.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.927 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:29.927 Verification LBA range: start 0x0 length 0x4000 00:16:29.927 NVMe0n1 : 1.01 10293.86 40.21 0.00 0.00 12370.01 928.43 14043.43 00:16:29.927 [2024-11-08T07:43:47.888Z] =================================================================================================================== 00:16:29.927 [2024-11-08T07:43:47.888Z] Total : 10293.86 40.21 0.00 0.00 12370.01 928.43 14043.43 00:16:29.927 07:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:29.927 07:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:29.927 07:43:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:30.185 07:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:30.186 07:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:30.444 07:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:30.706 07:43:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75213 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75213 ']' 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75213 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75213 00:16:34.020 killing process with pid 75213 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:34.020 07:43:51 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75213' 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75213 00:16:34.020 07:43:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75213 00:16:34.279 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:34.279 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:34.538 rmmod nvme_tcp 00:16:34.538 rmmod nvme_fabrics 00:16:34.538 rmmod nvme_keyring 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74971 ']' 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74971 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 74971 ']' 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 74971 00:16:34.538 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:16:34.797 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:34.797 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74971 00:16:34.797 killing process with pid 74971 00:16:34.797 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:34.797 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:34.797 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74971' 00:16:34.797 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 74971 00:16:34.797 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 74971 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.056 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:35.057 07:43:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:16:35.316 00:16:35.316 real 0m31.622s 00:16:35.316 user 2m0.201s 00:16:35.316 sys 0m6.485s 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:35.316 ************************************ 00:16:35.316 END TEST nvmf_failover 00:16:35.316 ************************************ 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host 
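(Annotation) The teardown above relies on the test having tagged its firewall rules with an SPDK_NVMF comment, so only those rules are dropped, after which the veth/bridge/namespace fixture is unwound. A condensed sketch of that cleanup, with the interface and namespace names from the log (the final namespace delete is what _remove_spdk_ns presumably does under the hood):

    # drop only the rules the test added (they carry an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # detach the bridge ports and bring them down
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster
        ip link set "$br" down
    done
    ip link delete nvmf_br type bridge

    # deleting each veth end also removes its peer
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: done by _remove_spdk_ns in the trace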
-- common/autotest_common.sh@10 -- # set +x 00:16:35.316 ************************************ 00:16:35.316 START TEST nvmf_host_discovery 00:16:35.316 ************************************ 00:16:35.316 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:35.316 * Looking for test storage... 00:16:35.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:35.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.576 --rc genhtml_branch_coverage=1 00:16:35.576 --rc genhtml_function_coverage=1 00:16:35.576 --rc genhtml_legend=1 00:16:35.576 --rc geninfo_all_blocks=1 00:16:35.576 --rc geninfo_unexecuted_blocks=1 00:16:35.576 00:16:35.576 ' 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:35.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.576 --rc genhtml_branch_coverage=1 00:16:35.576 --rc genhtml_function_coverage=1 00:16:35.576 --rc genhtml_legend=1 00:16:35.576 --rc geninfo_all_blocks=1 00:16:35.576 --rc geninfo_unexecuted_blocks=1 00:16:35.576 00:16:35.576 ' 00:16:35.576 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:35.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.576 --rc genhtml_branch_coverage=1 00:16:35.576 --rc genhtml_function_coverage=1 00:16:35.576 --rc genhtml_legend=1 00:16:35.576 --rc geninfo_all_blocks=1 00:16:35.576 --rc geninfo_unexecuted_blocks=1 00:16:35.576 00:16:35.576 ' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:35.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.577 --rc genhtml_branch_coverage=1 00:16:35.577 --rc genhtml_function_coverage=1 00:16:35.577 --rc genhtml_legend=1 00:16:35.577 --rc geninfo_all_blocks=1 00:16:35.577 --rc geninfo_unexecuted_blocks=1 00:16:35.577 00:16:35.577 ' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
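(Annotation) The lcov probe above works by splitting each version string on ".", "-" and ":" and comparing the fields numerically, which is what scripts/common.sh's cmp_versions does in the trace. A standalone sketch of the same idea, reduced to the "<" case (the helper name is illustrative, and non-numeric fields are not handled):

    version_lt() {                 # illustrative name, not the script's real helper
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( 10#$x < 10#$y )) && return 0
            (( 10#$x > 10#$y )) && return 1
        done
        return 1                   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: use the legacy option spelling"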
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.577 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.577 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:35.578 Cannot find device "nvmf_init_br" 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:35.578 Cannot find device "nvmf_init_br2" 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:35.578 Cannot find device "nvmf_tgt_br" 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.578 Cannot find device "nvmf_tgt_br2" 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:35.578 Cannot find device "nvmf_init_br" 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:35.578 Cannot find device "nvmf_init_br2" 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:35.578 Cannot find device "nvmf_tgt_br" 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:16:35.578 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:35.837 Cannot find device "nvmf_tgt_br2" 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:35.837 Cannot find device "nvmf_br" 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:35.837 Cannot find device "nvmf_init_if" 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:35.837 Cannot find device "nvmf_init_if2" 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.837 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:35.838 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:35.838 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.838 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
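(Annotation) The setup being traced here builds the test network out of four veth pairs: the *_if ends carry the addresses (target ends moved into the nvmf_tgt_ns_spdk namespace), and the *_br peer ends are enslaved to a single bridge so 10.0.0.1/2 can reach 10.0.0.3/4. A condensed sketch with the names and addresses from the log:

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target ends live inside the namespace, initiator ends stay in the root namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the four peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" up
        ip link set "$port" master nvmf_br
    done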
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:35.838 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.097 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.097 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.097 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:36.097 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:36.097 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:36.097 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.097 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:36.097 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:36.097 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.097 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:36.097 00:16:36.097 --- 10.0.0.3 ping statistics --- 00:16:36.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.098 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:36.098 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:36.098 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:16:36.098 00:16:36.098 --- 10.0.0.4 ping statistics --- 00:16:36.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.098 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:36.098 00:16:36.098 --- 10.0.0.1 ping statistics --- 00:16:36.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.098 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:36.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:36.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:16:36.098 00:16:36.098 --- 10.0.0.2 ping statistics --- 00:16:36.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.098 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75616 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75616 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75616 ']' 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:36.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:36.098 07:43:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.098 [2024-11-08 07:43:53.937768] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
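(Annotation) Connectivity is verified in both directions before any NVMe traffic is attempted: the root namespace pings the target addresses, and the namespace pings back at the initiator addresses, which is exactly the four ping runs shown above. Condensed:

    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2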
00:16:36.098 [2024-11-08 07:43:53.937854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.357 [2024-11-08 07:43:54.095840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.357 [2024-11-08 07:43:54.151099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.357 [2024-11-08 07:43:54.151163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.357 [2024-11-08 07:43:54.151180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.357 [2024-11-08 07:43:54.151193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.357 [2024-11-08 07:43:54.151204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.357 [2024-11-08 07:43:54.151572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.357 [2024-11-08 07:43:54.203082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.926 [2024-11-08 07:43:54.860524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.926 [2024-11-08 07:43:54.868659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.926 07:43:54 
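(Annotation) Once nvmf_tgt is up inside the namespace (the trace waits for its /var/tmp/spdk.sock RPC socket via waitforlisten), the discovery fixture is assembled over RPC: a TCP transport, a discovery listener on port 8009 and the null bdevs that will later back the namespaces. A condensed sketch of that bring-up as it appears in the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target runs inside the namespace on core 1 (-m 0x2); RPC stays on the default socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009
    $RPC bdev_null_create null0 1000 512   # 1000 MB null bdev, 512-byte blocks
    $RPC bdev_null_create null1 1000 512   # second bdev, hot-added to the subsystem later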
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:36.926 null0 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.926 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:37.186 null1 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75647 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75647 /tmp/host.sock 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75647 ']' 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:37.186 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:37.186 07:43:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:37.186 [2024-11-08 07:43:54.960776] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:16:37.186 [2024-11-08 07:43:54.960863] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75647 ] 00:16:37.186 [2024-11-08 07:43:55.124626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.445 [2024-11-08 07:43:55.200295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.445 [2024-11-08 07:43:55.285381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:38.013 07:43:55 
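(Annotation) The "host" in this test is simply a second SPDK app bound to /tmp/host.sock. Discovery is started against the 8009 listener, and the rest of the test polls host-side state through two small jq helpers, both visible in the trace. Condensed sketch:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/tmp/host.sock

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r $HOST_SOCK &

    $RPC -s $HOST_SOCK log_set_flag bdev_nvme
    $RPC -s $HOST_SOCK bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # polling helpers used by the assertions below
    get_subsystem_names() {
        $RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        $RPC -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }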
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:38.013 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:38.272 07:43:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.272 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:38.272 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:38.272 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.272 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:38.272 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:38.272 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.273 07:43:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.273 [2024-11-08 07:43:56.168866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
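(Annotation) Nothing shows up on the host until the subsystem is actually exposed: the trace creates cnode0, attaches null0 as a namespace, opens the 4420 data listener and (a few steps later, discovery.sh@103) allows the host NQN, at which point discovery turns it into the nvme0/nvme0n1 pair asserted below. Target-side RPCs, condensed:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode0

    $RPC nvmf_create_subsystem $NQN
    $RPC nvmf_subsystem_add_ns $NQN null0
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_host $NQN nqn.2021-12.io.spdk:test   # without this the host cannot attach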
'' == '' ]] 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:38.273 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:16:38.532 07:43:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:16:39.100 [2024-11-08 07:43:56.863280] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:39.100 [2024-11-08 07:43:56.863309] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:39.100 
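(Annotation) Every assertion in this test is written as a bounded retry: waitforcondition re-evaluates an expression up to 10 times, and the notification count is read from the host app's notify_get_notifications RPC, offset by the last seen notify_id. A simplified sketch of that pattern; the real helpers live in common/autotest_common.sh and host/discovery.sh, and the sleep interval here is an assumption (the trace only shows the retries):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    waitforcondition() {               # simplified stand-in for the autotest helper
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1                    # interval is an assumption
        done
        return 1
    }

    notify_id=0
    get_notification_count() {
        notification_count=$($RPC -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
                               | jq '. | length')
        notify_id=$(( notify_id + notification_count ))   # assumption: id advances by the count
    }

    waitforcondition 'get_notification_count && (( notification_count == 1 ))'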
[2024-11-08 07:43:56.863326] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:39.100 [2024-11-08 07:43:56.869310] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:39.100 [2024-11-08 07:43:56.923631] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:39.100 [2024-11-08 07:43:56.924468] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x84ce50:1 started. 00:16:39.100 [2024-11-08 07:43:56.926184] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:39.100 [2024-11-08 07:43:56.926205] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:39.100 [2024-11-08 07:43:56.932033] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x84ce50 was disconnected and freed. delete nvme_qpair. 00:16:39.668 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.668 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.669 [2024-11-08 07:43:57.585033] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x85af80:1 started. 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:39.669 [2024-11-08 07:43:57.592771] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x85af80 was disconnected and freed. delete nvme_qpair. 
00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:39.669 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.929 [2024-11-08 07:43:57.698321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:39.929 [2024-11-08 07:43:57.699324] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:39.929 [2024-11-08 07:43:57.699460] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:39.929 [2024-11-08 07:43:57.705327] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:39.929 [2024-11-08 07:43:57.767996] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:39.929 [2024-11-08 07:43:57.768038] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:39.929 [2024-11-08 07:43:57.768047] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:39.929 [2024-11-08 07:43:57.768053] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:16:39.929 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:39.929 
07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.930 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.189 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:40.189 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:40.189 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:16:40.189 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:40.189 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:40.189 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.189 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.189 [2024-11-08 07:43:57.899481] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:40.189 [2024-11-08 07:43:57.899506] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:40.189 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.189 [2024-11-08 07:43:57.903741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:40.189 id:0 cdw10:00000000 cdw11:00000000 00:16:40.189 [2024-11-08 07:43:57.903924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.189 [2024-11-08 07:43:57.904044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.189 [2024-11-08 07:43:57.904135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.189 [2024-11-08 07:43:57.904188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.189 [2024-11-08 07:43:57.904278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.190 [2024-11-08 07:43:57.904327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:40.190 [2024-11-08 07:43:57.904439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:40.190 [2024-11-08 07:43:57.904536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x829230 is same with the state(6) to be set 00:16:40.190 [2024-11-08 07:43:57.905633] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:40.190 [2024-11-08 07:43:57.905756] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io. 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:40.190 spdk:cnode0:10.0.0.3:4421 found again 00:16:40.190 [2024-11-08 07:43:57.905900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x829230 (9): Bad file descriptor 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.190 07:43:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:40.190 07:43:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.190 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:40.450 07:43:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.450 07:43:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.389 [2024-11-08 07:43:59.274129] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:41.389 [2024-11-08 07:43:59.274285] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:41.389 [2024-11-08 07:43:59.274314] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:41.389 [2024-11-08 07:43:59.280154] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:41.389 [2024-11-08 07:43:59.338414] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:41.389 [2024-11-08 07:43:59.339018] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x821eb0:1 started. 00:16:41.389 [2024-11-08 07:43:59.340901] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:41.389 [2024-11-08 07:43:59.341069] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:41.389 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.389 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:41.389 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:41.389 [2024-11-08 07:43:59.343240] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x821eb0 was disconnected and freed. delete nvme_qpair. 
00:16:41.389 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:41.389 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:41.389 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.389 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.648 request: 00:16:41.648 { 00:16:41.648 "name": "nvme", 00:16:41.648 "trtype": "tcp", 00:16:41.648 "traddr": "10.0.0.3", 00:16:41.648 "adrfam": "ipv4", 00:16:41.648 "trsvcid": "8009", 00:16:41.648 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:41.648 "wait_for_attach": true, 00:16:41.648 "method": "bdev_nvme_start_discovery", 00:16:41.648 "req_id": 1 00:16:41.648 } 00:16:41.648 Got JSON-RPC error response 00:16:41.648 response: 00:16:41.648 { 00:16:41.648 "code": -17, 00:16:41.648 "message": "File exists" 00:16:41.648 } 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.648 07:43:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.648 request: 00:16:41.648 { 00:16:41.648 "name": "nvme_second", 00:16:41.648 "trtype": "tcp", 00:16:41.648 "traddr": "10.0.0.3", 00:16:41.648 "adrfam": "ipv4", 00:16:41.648 "trsvcid": "8009", 00:16:41.648 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:41.648 "wait_for_attach": true, 00:16:41.648 "method": "bdev_nvme_start_discovery", 00:16:41.648 "req_id": 1 00:16:41.648 } 00:16:41.648 Got JSON-RPC error response 00:16:41.648 response: 00:16:41.648 { 00:16:41.648 "code": -17, 00:16:41.648 "message": "File exists" 00:16:41.648 } 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.648 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.649 07:43:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.027 [2024-11-08 07:44:00.577535] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.027 [2024-11-08 07:44:00.577577] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x84de40 with addr=10.0.0.3, port=8010 00:16:43.027 [2024-11-08 07:44:00.577613] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:43.027 [2024-11-08 07:44:00.577622] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:43.027 [2024-11-08 07:44:00.577631] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:43.964 [2024-11-08 07:44:01.577529] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:43.964 [2024-11-08 07:44:01.577567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x84de40 with addr=10.0.0.3, port=8010 00:16:43.964 [2024-11-08 07:44:01.577599] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:43.964 [2024-11-08 07:44:01.577608] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:43.964 [2024-11-08 07:44:01.577616] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:44.902 [2024-11-08 07:44:02.577467] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:44.902 request: 00:16:44.902 { 00:16:44.902 "name": "nvme_second", 00:16:44.902 "trtype": "tcp", 00:16:44.902 "traddr": "10.0.0.3", 00:16:44.902 "adrfam": "ipv4", 00:16:44.902 "trsvcid": "8010", 00:16:44.902 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:44.902 "wait_for_attach": false, 00:16:44.902 "attach_timeout_ms": 3000, 00:16:44.902 "method": "bdev_nvme_start_discovery", 00:16:44.902 "req_id": 1 00:16:44.902 } 00:16:44.902 Got JSON-RPC error response 00:16:44.902 response: 00:16:44.902 { 00:16:44.902 "code": -110, 00:16:44.902 "message": "Connection timed out" 00:16:44.902 } 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75647 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.902 rmmod nvme_tcp 00:16:44.902 rmmod nvme_fabrics 00:16:44.902 rmmod nvme_keyring 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75616 ']' 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75616 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 75616 ']' 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 75616 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75616 00:16:44.902 killing process with pid 75616 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75616' 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 75616 00:16:44.902 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 75616 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:45.161 07:44:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:45.161 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:45.161 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.161 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:45.161 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:45.161 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:45.161 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:45.161 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:45.161 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:45.420 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:45.420 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.420 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.420 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:45.420 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.420 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.420 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.420 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:45.420 00:16:45.420 real 0m10.059s 00:16:45.420 user 0m17.958s 00:16:45.420 sys 0m2.574s 00:16:45.420 ************************************ 00:16:45.421 END TEST nvmf_host_discovery 00:16:45.421 ************************************ 00:16:45.421 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:45.421 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.421 07:44:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:45.421 07:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:45.421 07:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:45.421 07:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.421 ************************************ 00:16:45.421 START TEST nvmf_host_multipath_status 00:16:45.421 ************************************ 00:16:45.421 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
00:16:45.421 * Looking for test storage... 00:16:45.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:45.680 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.681 --rc genhtml_branch_coverage=1 00:16:45.681 --rc genhtml_function_coverage=1 00:16:45.681 --rc genhtml_legend=1 00:16:45.681 --rc geninfo_all_blocks=1 00:16:45.681 --rc geninfo_unexecuted_blocks=1 00:16:45.681 00:16:45.681 ' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.681 --rc genhtml_branch_coverage=1 00:16:45.681 --rc genhtml_function_coverage=1 00:16:45.681 --rc genhtml_legend=1 00:16:45.681 --rc geninfo_all_blocks=1 00:16:45.681 --rc geninfo_unexecuted_blocks=1 00:16:45.681 00:16:45.681 ' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.681 --rc genhtml_branch_coverage=1 00:16:45.681 --rc genhtml_function_coverage=1 00:16:45.681 --rc genhtml_legend=1 00:16:45.681 --rc geninfo_all_blocks=1 00:16:45.681 --rc geninfo_unexecuted_blocks=1 00:16:45.681 00:16:45.681 ' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.681 --rc genhtml_branch_coverage=1 00:16:45.681 --rc genhtml_function_coverage=1 00:16:45.681 --rc genhtml_legend=1 00:16:45.681 --rc geninfo_all_blocks=1 00:16:45.681 --rc geninfo_unexecuted_blocks=1 00:16:45.681 00:16:45.681 ' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.681 07:44:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.681 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.681 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:45.682 Cannot find device "nvmf_init_br" 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:45.682 Cannot find device "nvmf_init_br2" 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:45.682 Cannot find device "nvmf_tgt_br" 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.682 Cannot find device "nvmf_tgt_br2" 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:45.682 Cannot find device "nvmf_init_br" 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:45.682 Cannot find device "nvmf_init_br2" 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:45.682 Cannot find device "nvmf_tgt_br" 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:45.682 Cannot find device "nvmf_tgt_br2" 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:45.682 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:45.941 Cannot find device "nvmf_br" 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:45.941 Cannot find device "nvmf_init_if" 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:45.941 Cannot find device "nvmf_init_if2" 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:45.941 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:46.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:46.201 00:16:46.201 --- 10.0.0.3 ping statistics --- 00:16:46.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.201 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:46.201 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:46.201 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:16:46.201 00:16:46.201 --- 10.0.0.4 ping statistics --- 00:16:46.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.201 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:46.201 00:16:46.201 --- 10.0.0.1 ping statistics --- 00:16:46.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.201 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:46.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:16:46.201 00:16:46.201 --- 10.0.0.2 ping statistics --- 00:16:46.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.201 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76146 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76146 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76146 ']' 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
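The nvmf_veth_init sequence traced above builds the virtual topology the pings then verify: veth pairs for initiator and target, the target ends moved into a private namespace, all peer legs enslaved to one bridge. A condensed sketch showing one of the two pairs (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic in (rule tagged so nvmftestfini can remove it later)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions across the bridge
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1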
00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:46.201 07:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:46.201 [2024-11-08 07:44:04.060017] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:16:46.201 [2024-11-08 07:44:04.060104] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.460 [2024-11-08 07:44:04.218058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.460 [2024-11-08 07:44:04.272933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.460 [2024-11-08 07:44:04.273021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.460 [2024-11-08 07:44:04.273037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.460 [2024-11-08 07:44:04.273051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.460 [2024-11-08 07:44:04.273062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.460 [2024-11-08 07:44:04.274261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.460 [2024-11-08 07:44:04.274273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.460 [2024-11-08 07:44:04.324738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.028 07:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:47.028 07:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:16:47.028 07:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.028 07:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:47.028 07:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:47.288 07:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.288 07:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76146 00:16:47.288 07:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:47.547 [2024-11-08 07:44:05.281602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.547 07:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:47.547 Malloc0 00:16:47.807 07:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:16:47.807 07:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.066 07:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:48.326 [2024-11-08 07:44:06.209869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:48.326 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:48.585 [2024-11-08 07:44:06.405969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:48.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.585 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76201 00:16:48.585 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.585 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:48.586 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76201 /var/tmp/bdevperf.sock 00:16:48.586 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76201 ']' 00:16:48.586 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:48.586 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:48.586 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
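The rpc.py calls traced above set up the multipath scenario end to end: one malloc namespace exported through a single subsystem with two TCP listeners, and bdevperf attaching one controller per listener so both paths land under the same Nvme0n1 bdev. Condensed from the commands in the trace (rpc path shortened for readability):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # bdevperf side: one controller per listener, both under bdev name Nvme0
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10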
00:16:48.586 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:48.586 07:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.522 07:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.522 07:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:16:49.522 07:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:49.781 07:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:50.040 Nvme0n1 00:16:50.040 07:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:50.299 Nvme0n1 00:16:50.299 07:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:50.299 07:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:52.832 07:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:52.832 07:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:52.832 07:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:52.832 07:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:53.769 07:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:53.769 07:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:53.769 07:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:53.769 07:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:54.028 07:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.028 07:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:54.028 07:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.028 07:44:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:54.596 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:54.596 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:54.596 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.596 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:54.596 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.596 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:54.596 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:54.596 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.855 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:54.855 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:54.855 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:54.855 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:55.113 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.113 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:55.113 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:55.113 07:44:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:55.371 07:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:55.371 07:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:55.371 07:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:55.630 07:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:55.888 07:44:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:56.825 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:56.825 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:56.825 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:56.825 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.084 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.084 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:57.084 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.084 07:44:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:57.084 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.084 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:57.084 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:57.084 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.343 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.343 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:57.343 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.343 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:57.603 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.603 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:57.603 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.603 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:57.862 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.862 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:57.862 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:57.862 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.121 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.121 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:58.121 07:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:58.380 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:58.638 07:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:59.575 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:59.575 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:59.575 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:59.575 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.834 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.834 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:59.834 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:59.834 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.093 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:00.093 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:00.093 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.093 07:44:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:00.352 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.352 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:17:00.352 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:00.352 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.352 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.352 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:00.352 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.352 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:00.611 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.611 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:00.611 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.611 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:00.869 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.869 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:00.869 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:01.128 07:44:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:01.387 07:44:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:02.323 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:02.323 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:02.323 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.323 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:02.596 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.596 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:02.596 07:44:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.596 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:02.861 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:02.861 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:02.861 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:02.861 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.120 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.120 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:03.120 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:03.120 07:44:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.380 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.380 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:03.380 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.380 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:03.640 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.640 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:03.640 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:03.640 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.640 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:03.640 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:03.640 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:03.899 07:44:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:04.159 07:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:05.096 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:05.096 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:05.096 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.096 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:05.356 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:05.356 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:05.356 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:05.356 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.614 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:05.614 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:05.615 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:05.615 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.874 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.874 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:05.874 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.874 07:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:06.133 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.133 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:06.133 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.133 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:17:06.392 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:06.392 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:06.392 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.392 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:06.651 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:06.652 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:06.652 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:06.910 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:06.910 07:44:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:08.287 07:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:08.287 07:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:08.287 07:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:08.288 07:44:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.288 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:08.288 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:08.288 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.288 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:08.546 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.546 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:08.546 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.546 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:17:08.805 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.805 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:08.805 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.805 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:08.805 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.805 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:08.806 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.806 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:09.064 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:09.064 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:09.064 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.065 07:44:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:09.323 07:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:09.323 07:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:09.582 07:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:09.582 07:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:09.841 07:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:10.100 07:44:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:11.040 07:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:11.040 07:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:11.040 07:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
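For reference, the set_ANA_state / port_status cycle traced above reduces to two listener RPCs followed by a jq filter over bdev_nvme_get_io_paths. A minimal sketch of that pattern, assuming only the rpc.py path, NQN, address and ports that appear in this log (the helper names set_ana and port_status below are illustrative, not the test script's own):

# Sketch only: mirrors the set_ANA_state + port_status pattern traced above.
# rpc.py path, NQN, address and ports are taken from this log; the helper names are hypothetical.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

set_ana() {      # $1 = port, $2 = ANA state (optimized | non_optimized | inaccessible)
    "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s "$1" -n "$2"
}

port_status() {  # $1 = port, $2 = field (current | connected | accessible), $3 = expected value
    local v
    v=$("$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$v" == "$3" ]]
}

set_ana 4420 optimized
set_ana 4421 optimized
sleep 1
port_status 4420 current true && port_status 4421 accessible true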
00:17:11.040 07:44:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:11.299 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.299 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:11.299 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:11.299 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.558 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.558 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:11.558 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.558 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:11.558 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.558 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:11.558 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.558 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:11.817 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.817 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:11.817 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.817 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:12.075 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.075 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:12.075 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.075 07:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:12.333 07:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:12.333 
07:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:12.333 07:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:12.592 07:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:12.850 07:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:13.786 07:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:13.786 07:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:13.786 07:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.786 07:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:14.046 07:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:14.046 07:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:14.046 07:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.046 07:44:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:14.305 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.305 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:14.305 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:14.305 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.305 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.305 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:14.305 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:14.305 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.564 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.564 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:14.564 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.564 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:14.822 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.822 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:14.822 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.823 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:15.081 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.081 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:15.081 07:44:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:15.339 07:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:15.598 07:44:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:16.560 07:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:16.560 07:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:16.560 07:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:16.560 07:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.819 07:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.819 07:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:16.819 07:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.819 07:44:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:17.386 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.386 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:17.386 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.386 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:17.386 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.386 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:17.386 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.386 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:17.645 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.645 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:17.645 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:17.645 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.903 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.904 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:17.904 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.904 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:18.161 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.161 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:18.161 07:44:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:18.161 07:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:18.419 07:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:19.796 07:44:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.796 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:20.055 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.055 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:20.055 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.055 07:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:20.315 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.315 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:20.315 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.315 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:20.574 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.574 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:20.574 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.574 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76201 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76201 ']' 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76201 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76201 00:17:20.833 killing process with pid 76201 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76201' 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76201 00:17:20.833 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76201 00:17:20.833 { 00:17:20.833 "results": [ 00:17:20.833 { 00:17:20.833 "job": "Nvme0n1", 00:17:20.833 "core_mask": "0x4", 00:17:20.833 "workload": "verify", 00:17:20.833 "status": "terminated", 00:17:20.833 "verify_range": { 00:17:20.833 "start": 0, 00:17:20.833 "length": 16384 00:17:20.833 }, 00:17:20.833 "queue_depth": 128, 00:17:20.833 "io_size": 4096, 00:17:20.833 "runtime": 30.353, 00:17:20.833 "iops": 8862.254142918327, 00:17:20.833 "mibps": 34.61818024577472, 00:17:20.833 "io_failed": 0, 00:17:20.833 "io_timeout": 0, 00:17:20.833 "avg_latency_us": 14423.870520036056, 00:17:20.833 "min_latency_us": 145.31047619047618, 00:17:20.833 "max_latency_us": 4026531.84 00:17:20.833 } 00:17:20.833 ], 00:17:20.833 "core_count": 1 00:17:20.833 } 00:17:21.100 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76201 00:17:21.100 07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:21.101 [2024-11-08 07:44:06.463107] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:17:21.101 [2024-11-08 07:44:06.463189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76201 ] 00:17:21.101 [2024-11-08 07:44:06.604045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.101 [2024-11-08 07:44:06.646771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.101 [2024-11-08 07:44:06.689061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:21.101 Running I/O for 90 seconds... 
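As a side note, the headline numbers in the bdevperf result JSON printed above (job, iops, avg_latency_us, status) can be pulled out with a one-line jq filter; a sketch, assuming the JSON has been saved to a file (results.json is a hypothetical name, the field names come from the dump above):

# Sketch only: summarizes the bdevperf result JSON shown above.
# Field names (.job, .iops, .avg_latency_us, .status) come from that dump; results.json is hypothetical.
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us, status \(.status)"' results.json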
00:17:21.101 9296.00 IOPS, 36.31 MiB/s [2024-11-08T07:44:39.062Z] 10152.00 IOPS, 39.66 MiB/s [2024-11-08T07:44:39.062Z] 10389.33 IOPS, 40.58 MiB/s [2024-11-08T07:44:39.062Z] 10520.00 IOPS, 41.09 MiB/s [2024-11-08T07:44:39.062Z] 10576.00 IOPS, 41.31 MiB/s [2024-11-08T07:44:39.062Z] 10486.50 IOPS, 40.96 MiB/s [2024-11-08T07:44:39.062Z] 10427.29 IOPS, 40.73 MiB/s [2024-11-08T07:44:39.062Z] 10398.75 IOPS, 40.62 MiB/s [2024-11-08T07:44:39.062Z] 10431.44 IOPS, 40.75 MiB/s [2024-11-08T07:44:39.062Z] 10434.70 IOPS, 40.76 MiB/s [2024-11-08T07:44:39.062Z] 10450.45 IOPS, 40.82 MiB/s [2024-11-08T07:44:39.062Z] 10454.25 IOPS, 40.84 MiB/s [2024-11-08T07:44:39.062Z] 10455.00 IOPS, 40.84 MiB/s [2024-11-08T07:44:39.062Z] [2024-11-08 07:44:21.802399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.802463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.802530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.802561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.802591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.802621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.802675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.802710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.802745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 
sqhd:0003 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.101 [2024-11-08 07:44:21.802778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.101 [2024-11-08 07:44:21.802849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.101 [2024-11-08 07:44:21.802883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.101 [2024-11-08 07:44:21.802915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.101 [2024-11-08 07:44:21.802946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.802966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.101 [2024-11-08 07:44:21.802980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.101 [2024-11-08 07:44:21.803022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.101 [2024-11-08 07:44:21.803055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:21.101 [2024-11-08 07:44:21.803637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.101 [2024-11-08 07:44:21.803651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.803691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.803724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.803757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.803789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:21.102 [2024-11-08 07:44:21.803822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.803866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.803897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.803929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.803960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.803978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.803999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.804444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.804474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.804511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.804541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.804572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.804602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.804636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.102 [2024-11-08 07:44:21.804667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 
m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:21.102 [2024-11-08 07:44:21.804904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.102 [2024-11-08 07:44:21.804917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.804935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.804948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.804965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.804985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.805016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.805047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.805078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.805108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.805148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.805180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805405] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.103 [2024-11-08 07:44:21.805938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.805969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.805996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.806010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.806028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:50 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.806042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.806068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.806081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.806100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.806114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.806132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.806146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:21.103 [2024-11-08 07:44:21.806165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.103 [2024-11-08 07:44:21.806181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.806818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:21.806842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.806872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.806886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.806911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.806925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.806949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.806963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.806987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.807011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.807036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.807051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.807076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.807089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.807113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.807128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.811248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.811301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.811340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.811378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.811416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.811457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.811495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:17:21.104 [2024-11-08 07:44:21.811532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:21.811574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:21.811587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:21.104 10073.36 IOPS, 39.35 MiB/s [2024-11-08T07:44:39.065Z] 9401.80 IOPS, 36.73 MiB/s [2024-11-08T07:44:39.065Z] 8814.19 IOPS, 34.43 MiB/s [2024-11-08T07:44:39.065Z] 8295.71 IOPS, 32.41 MiB/s [2024-11-08T07:44:39.065Z] 8122.44 IOPS, 31.73 MiB/s [2024-11-08T07:44:39.065Z] 8239.32 IOPS, 32.18 MiB/s [2024-11-08T07:44:39.065Z] 8315.40 IOPS, 32.48 MiB/s [2024-11-08T07:44:39.065Z] 8361.90 IOPS, 32.66 MiB/s [2024-11-08T07:44:39.065Z] 8411.73 IOPS, 32.86 MiB/s [2024-11-08T07:44:39.065Z] 8486.78 IOPS, 33.15 MiB/s [2024-11-08T07:44:39.065Z] 8552.17 IOPS, 33.41 MiB/s [2024-11-08T07:44:39.065Z] 8619.68 IOPS, 33.67 MiB/s [2024-11-08T07:44:39.065Z] 8658.69 IOPS, 33.82 MiB/s [2024-11-08T07:44:39.065Z] 8684.78 IOPS, 33.92 MiB/s [2024-11-08T07:44:39.065Z] 8711.11 IOPS, 34.03 MiB/s [2024-11-08T07:44:39.065Z] [2024-11-08 07:44:36.332426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:36.332500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:36.332546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:36.332609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.332640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.332672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.332705] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.332737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:36.332769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:36.332799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:36.332830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.332864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.332896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.332927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.332960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.332999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.333013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.333032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.333045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.333063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.333076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.333096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.104 [2024-11-08 07:44:36.333109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.333127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.104 [2024-11-08 07:44:36.333140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:21.104 [2024-11-08 07:44:36.333158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.333170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333361] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.333479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.333513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.333532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.333546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.334357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.334396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 
07:44:36.334481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.334495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.334527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.334559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.334602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.334643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.334693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.105 [2024-11-08 07:44:36.334960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.334979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.335003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.335023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.335040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:21.105 [2024-11-08 07:44:36.335063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.105 [2024-11-08 07:44:36.335077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.335775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:78 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.335973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.335995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.336014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.336027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.336045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.336059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.336077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.336091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.336110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.336123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.336917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.336944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.336966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.336992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.337011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.337024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.337044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.337058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.337093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.337107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.337125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.337138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.337157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.106 [2024-11-08 07:44:36.337171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:21.106 [2024-11-08 07:44:36.337189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.106 [2024-11-08 07:44:36.337202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:21.107 [2024-11-08 07:44:36.337222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.107 [2024-11-08 07:44:36.337235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:21.107 [2024-11-08 07:44:36.337254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.107 [2024-11-08 07:44:36.337267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:21.107 [2024-11-08 07:44:36.337286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.107 [2024-11-08 07:44:36.337299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 
sqhd:001a p:0 m:0 dnr:0
00:17:21.107 [2024-11-08 07:44:36.337318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:21.107 [2024-11-08 07:44:36.337332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0
[... long run of similar nvme_qpair.c print_command/print_completion notice pairs omitted: READ and WRITE commands on qid:1, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-08 07:44:36.337 through 07:44:36.362 ...]
00:17:21.112 [2024-11-08 07:44:36.362386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:21.112 [2024-11-08 07:44:36.362407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.362457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.362507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.112 [2024-11-08 07:44:36.362558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.112 [2024-11-08 07:44:36.362608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.112 [2024-11-08 07:44:36.362670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.112 [2024-11-08 07:44:36.362720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.362770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.362831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.362880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.362931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.362960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.362993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.363043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.363092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.363142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.112 [2024-11-08 07:44:36.363193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.112 [2024-11-08 07:44:36.363245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.112 [2024-11-08 07:44:36.363296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.112 [2024-11-08 07:44:36.363348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.112 [2024-11-08 07:44:36.363397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:21.112 [2024-11-08 07:44:36.363435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:21.112 [2024-11-08 07:44:36.363457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:21.113 [2024-11-08 07:44:36.363486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.113 [2024-11-08 07:44:36.363507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:21.113 [2024-11-08 07:44:36.363535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:118864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.113 [2024-11-08 07:44:36.363556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:21.113 [2024-11-08 07:44:36.363585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.113 [2024-11-08 07:44:36.363605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:21.113 [2024-11-08 07:44:36.363635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.113 [2024-11-08 07:44:36.363656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.113 [2024-11-08 07:44:36.363684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.113 [2024-11-08 07:44:36.363705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:21.113 [2024-11-08 07:44:36.363735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:118504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.113 [2024-11-08 07:44:36.363756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:21.113 [2024-11-08 07:44:36.363785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:21.113 [2024-11-08 07:44:36.363806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:21.113
8780.00 IOPS, 34.30 MiB/s [2024-11-08T07:44:39.074Z]
8842.80 IOPS, 34.54 MiB/s [2024-11-08T07:44:39.074Z] Received shutdown signal, test time was about 30.353680 seconds 00:17:21.113
00:17:21.113                                                                                       Latency(us) 00:17:21.113
[2024-11-08T07:44:39.074Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max 00:17:21.113
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:21.113
Verification LBA range: start 0x0 length 0x4000 00:17:21.113
Nvme0n1                     :      30.35    8862.25      34.62       0.00     0.00   14423.87     145.31 4026531.84 00:17:21.113
[2024-11-08T07:44:39.074Z] =================================================================================================================== 00:17:21.113
[2024-11-08T07:44:39.074Z] Total                       :               8862.25      34.62       0.00     0.00   14423.87     145.31 4026531.84 00:17:21.113
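A quick sanity check on the bdevperf summary above, using the 4096-byte IO size printed in the job line: MiB/s is simply IOPS times the IO size,

    8862.25 IOPS * 4096 B = 36,299,776 B/s ~= 34.62 MiB/s
    8780.00 IOPS * 4096 B = 35,962,880 B/s ~= 34.30 MiB/s

which matches both the Total row and the per-interval samples, so the reported throughput is consistent with the reported IOPS.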
07:44:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:21.372 rmmod nvme_tcp 00:17:21.372 rmmod nvme_fabrics 00:17:21.372 rmmod nvme_keyring 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76146 ']' 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76146 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76146 ']' 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76146 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76146 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:21.372 killing process with pid 76146 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76146' 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76146 00:17:21.372 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76146 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.631 
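The entries above (and the interface teardown that follows) are the scripted cleanup for the multipath test. A minimal sketch of the same teardown done by hand, assuming the repo path and target PID 76146 from this trace (both are specific to this run, not defaults):

    #!/usr/bin/env bash
    # Tear down the NVMe-oF test target the way nvmftestfini/nvmfcleanup do above:
    # remove the subsystem, flush, unload initiator-side kernel modules, stop the target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp       # in the trace this also rmmod's nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 76146                    # killprocess() in the log additionally wait()s on the PID it spawned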
07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:21.631 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:17:21.889 00:17:21.889 real 0m36.457s 00:17:21.889 user 1m52.490s 00:17:21.889 sys 0m13.388s 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:21.889 ************************************ 00:17:21.889 END TEST nvmf_host_multipath_status 00:17:21.889 ************************************ 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test 
nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.889 ************************************ 00:17:21.889 START TEST nvmf_discovery_remove_ifc 00:17:21.889 ************************************ 00:17:21.889 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:22.149 * Looking for test storage... 00:17:22.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:17:22.149 07:44:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:22.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.149 --rc genhtml_branch_coverage=1 00:17:22.149 --rc genhtml_function_coverage=1 00:17:22.149 --rc genhtml_legend=1 00:17:22.149 --rc geninfo_all_blocks=1 00:17:22.149 --rc geninfo_unexecuted_blocks=1 00:17:22.149 00:17:22.149 ' 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:22.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.149 --rc genhtml_branch_coverage=1 00:17:22.149 --rc genhtml_function_coverage=1 00:17:22.149 --rc genhtml_legend=1 00:17:22.149 --rc geninfo_all_blocks=1 00:17:22.149 --rc geninfo_unexecuted_blocks=1 00:17:22.149 00:17:22.149 ' 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:22.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.149 --rc genhtml_branch_coverage=1 00:17:22.149 --rc genhtml_function_coverage=1 00:17:22.149 --rc genhtml_legend=1 00:17:22.149 --rc geninfo_all_blocks=1 00:17:22.149 --rc geninfo_unexecuted_blocks=1 00:17:22.149 00:17:22.149 ' 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:22.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.149 --rc genhtml_branch_coverage=1 00:17:22.149 --rc genhtml_function_coverage=1 00:17:22.149 --rc genhtml_legend=1 00:17:22.149 --rc geninfo_all_blocks=1 00:17:22.149 --rc geninfo_unexecuted_blocks=1 00:17:22.149 00:17:22.149 ' 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.149 07:44:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.149 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.150 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:22.150 07:44:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:22.150 Cannot find device "nvmf_init_br" 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:22.150 Cannot find device "nvmf_init_br2" 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:22.150 Cannot find device "nvmf_tgt_br" 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:17:22.150 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:22.150 Cannot find device "nvmf_tgt_br2" 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:22.409 Cannot find device "nvmf_init_br" 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:22.409 Cannot find device "nvmf_init_br2" 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:22.409 Cannot find device "nvmf_tgt_br" 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:22.409 Cannot find device "nvmf_tgt_br2" 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:22.409 Cannot find device "nvmf_br" 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:22.409 Cannot find device "nvmf_init_if" 00:17:22.409 07:44:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:22.409 Cannot find device "nvmf_init_if2" 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:22.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:22.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:22.409 07:44:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:22.409 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:22.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:17:22.669 00:17:22.669 --- 10.0.0.3 ping statistics --- 00:17:22.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.669 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:22.669 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:22.669 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:22.669 00:17:22.669 --- 10.0.0.4 ping statistics --- 00:17:22.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.669 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:22.669 00:17:22.669 --- 10.0.0.1 ping statistics --- 00:17:22.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.669 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:22.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:22.669 00:17:22.669 --- 10.0.0.2 ping statistics --- 00:17:22.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.669 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77016 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77016 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77016 ']' 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:22.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:22.669 07:44:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 [2024-11-08 07:44:40.537634] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:17:22.669 [2024-11-08 07:44:40.538353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.928 [2024-11-08 07:44:40.699300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.928 [2024-11-08 07:44:40.764208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.928 [2024-11-08 07:44:40.764280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.928 [2024-11-08 07:44:40.764295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.928 [2024-11-08 07:44:40.764309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.928 [2024-11-08 07:44:40.764320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.928 [2024-11-08 07:44:40.764710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.928 [2024-11-08 07:44:40.822200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:23.497 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:23.497 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:17:23.497 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:23.497 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.497 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:23.757 [2024-11-08 07:44:41.494155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.757 [2024-11-08 07:44:41.502286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:23.757 null0 00:17:23.757 [2024-11-08 07:44:41.534205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77045 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77045 /tmp/host.sock 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77045 ']' 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:23.757 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:23.757 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:23.757 [2024-11-08 07:44:41.592776] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:17:23.757 [2024-11-08 07:44:41.592835] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77045 ] 00:17:24.016 [2024-11-08 07:44:41.731826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.016 [2024-11-08 07:44:41.777058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:24.016 [2024-11-08 07:44:41.903242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.016 07:44:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:25.393 [2024-11-08 07:44:42.946871] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:25.393 [2024-11-08 07:44:42.946909] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:25.393 [2024-11-08 07:44:42.946929] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:25.393 [2024-11-08 07:44:42.952908] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:25.393 [2024-11-08 07:44:43.007293] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:25.393 [2024-11-08 07:44:43.008414] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f63fb0:1 started. 00:17:25.393 [2024-11-08 07:44:43.010290] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:25.393 [2024-11-08 07:44:43.010348] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:25.393 [2024-11-08 07:44:43.010368] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:25.393 [2024-11-08 07:44:43.010386] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:25.393 [2024-11-08 07:44:43.010415] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.393 [2024-11-08 07:44:43.015610] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f63fb0 was disconnected and freed. delete nvme_qpair. 
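The host-side bring-up traced above condenses to roughly the following hand-written sketch; the binary path, the /tmp/host.sock RPC address, and every flag are taken from the xtrace, while the rpc() helper is an assumption standing in for the test's rpc_cmd wrapper (presumed to call the repo's scripts/rpc.py):

    # Sketch: start the host-side nvmf_tgt and point discovery at the target
    # listening on 10.0.0.3:8009 (values copied from the trace above).
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!

    # Assumed stand-in for the harness's rpc_cmd wrapper.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }

    rpc bdev_nvme_set_options -e 1          # options exactly as the test passes them
    rpc framework_start_init
    rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

With --wait-for-attach the RPC only returns once the discovered subsystem has been attached, which is why nvme0n1 is already present when the bdev list is queried next.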
00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:25.393 07:44:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:26.330 07:44:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:26.330 07:44:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:27.292 07:44:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:28.668 07:44:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:29.603 07:44:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:29.603 07:44:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:30.537 07:44:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:30.537 [2024-11-08 07:44:48.437776] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:30.537 [2024-11-08 07:44:48.437845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.537 [2024-11-08 07:44:48.437862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.537 [2024-11-08 07:44:48.437875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.538 [2024-11-08 07:44:48.437884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.538 [2024-11-08 07:44:48.437893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.538 [2024-11-08 07:44:48.437902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.538 [2024-11-08 07:44:48.437912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.538 [2024-11-08 07:44:48.437921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.538 [2024-11-08 07:44:48.437932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.538 [2024-11-08 07:44:48.437941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.538 [2024-11-08 07:44:48.437949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40240 is same with the state(6) to be set 00:17:30.538 [2024-11-08 07:44:48.447763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f40240 (9): Bad file descriptor 00:17:30.538 [2024-11-08 07:44:48.457797] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:30.538 [2024-11-08 07:44:48.457814] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:30.538 [2024-11-08 07:44:48.457824] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:30.538 [2024-11-08 07:44:48.457831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:30.538 [2024-11-08 07:44:48.457909] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:31.474 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:31.474 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.474 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:31.474 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.474 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:31.474 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.474 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:31.733 [2024-11-08 07:44:49.495102] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:31.733 [2024-11-08 07:44:49.495247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f40240 with addr=10.0.0.3, port=4420 00:17:31.733 [2024-11-08 07:44:49.495295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f40240 is same with the state(6) to be set 00:17:31.733 [2024-11-08 07:44:49.495419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f40240 (9): Bad file descriptor 00:17:31.733 [2024-11-08 07:44:49.496521] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:17:31.733 [2024-11-08 07:44:49.496630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:31.733 [2024-11-08 07:44:49.496663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:31.733 [2024-11-08 07:44:49.496743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:31.733 [2024-11-08 07:44:49.496782] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:31.733 [2024-11-08 07:44:49.496821] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:17:31.733 [2024-11-08 07:44:49.496846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:31.733 [2024-11-08 07:44:49.496883] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:31.733 [2024-11-08 07:44:49.496902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:31.733 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.733 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:31.733 07:44:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:32.704 [2024-11-08 07:44:50.497040] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:32.704 [2024-11-08 07:44:50.497087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:32.704 [2024-11-08 07:44:50.497112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:32.704 [2024-11-08 07:44:50.497123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:32.704 [2024-11-08 07:44:50.497133] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:17:32.704 [2024-11-08 07:44:50.497143] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:32.704 [2024-11-08 07:44:50.497151] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:32.704 [2024-11-08 07:44:50.497157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
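The polling and the interface removal that produced the connection timeouts above reduce to a short loop; this is a minimal sketch reusing the rpc() helper from the earlier block, with the namespace, interface, and jq/sort/xargs pipeline taken verbatim from the trace:

    # List bdev names the way the test's get_bdev_list does.
    get_bdev_list() {
        rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Pull the listener address out from under the connected controller, then take
    # the target-side interface down inside its namespace.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # Poll once a second until nvme0n1 disappears (the test's wait_for_bdev '' loop).
    while [[ -n "$(get_bdev_list)" ]]; do sleep 1; done

The --ctrlr-loss-timeout-sec 2 / --reconnect-delay-sec 1 settings from the discovery call are what eventually make the reconnect attempts give up and delete the bdev, ending the loop.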
00:17:32.704 [2024-11-08 07:44:50.497240] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:32.704 [2024-11-08 07:44:50.497296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.704 [2024-11-08 07:44:50.497311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.704 [2024-11-08 07:44:50.497326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.704 [2024-11-08 07:44:50.497337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.704 [2024-11-08 07:44:50.497347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.704 [2024-11-08 07:44:50.497356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.705 [2024-11-08 07:44:50.497366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.705 [2024-11-08 07:44:50.497375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.705 [2024-11-08 07:44:50.497385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.705 [2024-11-08 07:44:50.497394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.705 [2024-11-08 07:44:50.497404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:17:32.705 [2024-11-08 07:44:50.497799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecba20 (9): Bad file descriptor 00:17:32.705 [2024-11-08 07:44:50.498825] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:32.705 [2024-11-08 07:44:50.498846] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:32.705 07:44:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:34.082 07:44:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:34.082 07:44:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:34.649 [2024-11-08 07:44:52.510971] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:34.649 [2024-11-08 07:44:52.511012] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:34.649 [2024-11-08 07:44:52.511036] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:34.649 [2024-11-08 07:44:52.517018] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:34.649 [2024-11-08 07:44:52.571332] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:17:34.649 [2024-11-08 07:44:52.572476] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1f6c290:1 started. 00:17:34.649 [2024-11-08 07:44:52.574004] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:34.649 [2024-11-08 07:44:52.574192] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:34.649 [2024-11-08 07:44:52.574257] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:34.649 [2024-11-08 07:44:52.574387] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:34.649 [2024-11-08 07:44:52.574487] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:34.649 [2024-11-08 07:44:52.579665] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1f6c290 was disconnected and freed. delete nvme_qpair. 
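The restore step just traced, in which the address comes back and discovery re-attaches the subsystem as nvme1, condenses to the following sketch (same helpers and names as above):

    # Restore the target address and bring the interface back up; the discovery
    # service is still polling, so a fresh controller attaches as nvme1.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Wait for the re-attached namespace to show up as nvme1n1.
    while [[ "$(get_bdev_list)" != nvme1n1 ]]; do sleep 1; done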
00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77045 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77045 ']' 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77045 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77045 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77045' 00:17:34.909 killing process with pid 77045 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77045 00:17:34.909 07:44:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77045 00:17:35.168 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:35.168 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.168 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.427 rmmod nvme_tcp 00:17:35.427 rmmod nvme_fabrics 00:17:35.427 rmmod nvme_keyring 00:17:35.427 07:44:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77016 ']' 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77016 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77016 ']' 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77016 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77016 00:17:35.427 killing process with pid 77016 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77016' 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77016 00:17:35.427 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77016 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.686 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:35.944 00:17:35.944 real 0m13.868s 00:17:35.944 user 0m22.347s 00:17:35.944 sys 0m3.400s 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:35.944 ************************************ 00:17:35.944 END TEST nvmf_discovery_remove_ifc 00:17:35.944 ************************************ 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.944 ************************************ 00:17:35.944 START TEST nvmf_identify_kernel_target 00:17:35.944 ************************************ 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:35.944 * Looking for test storage... 
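The nvmftestfini sequence that just closed the discovery_remove_ifc test (module unload, killing the long-lived target, iptables restore, veth and namespace teardown) corresponds roughly to the commands below; interface and bridge names are the ones visible in the trace, and the final `ip netns delete` is an assumption standing in for the harness's remove_spdk_ns helper:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 77016                                    # killprocess of the target pid seen above
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster
        ip link set "$br" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk              # assumed equivalent of remove_spdk_ns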
00:17:35.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:17:35.944 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.216 --rc genhtml_branch_coverage=1 00:17:36.216 --rc genhtml_function_coverage=1 00:17:36.216 --rc genhtml_legend=1 00:17:36.216 --rc geninfo_all_blocks=1 00:17:36.216 --rc geninfo_unexecuted_blocks=1 00:17:36.216 00:17:36.216 ' 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.216 --rc genhtml_branch_coverage=1 00:17:36.216 --rc genhtml_function_coverage=1 00:17:36.216 --rc genhtml_legend=1 00:17:36.216 --rc geninfo_all_blocks=1 00:17:36.216 --rc geninfo_unexecuted_blocks=1 00:17:36.216 00:17:36.216 ' 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.216 --rc genhtml_branch_coverage=1 00:17:36.216 --rc genhtml_function_coverage=1 00:17:36.216 --rc genhtml_legend=1 00:17:36.216 --rc geninfo_all_blocks=1 00:17:36.216 --rc geninfo_unexecuted_blocks=1 00:17:36.216 00:17:36.216 ' 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:36.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.216 --rc genhtml_branch_coverage=1 00:17:36.216 --rc genhtml_function_coverage=1 00:17:36.216 --rc genhtml_legend=1 00:17:36.216 --rc geninfo_all_blocks=1 00:17:36.216 --rc geninfo_unexecuted_blocks=1 00:17:36.216 00:17:36.216 ' 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
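The scripts/common.sh gate above (`lt 1.15 2`) is only deciding whether the installed lcov is old enough to need the legacy --rc flags. A generic way to express that kind of dotted-version check in bash, not the literal cmp_versions implementation, would be:

    # Hedged sketch: succeed when $1 sorts strictly before $2 as a dotted version.
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi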
00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.216 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.217 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:36.217 07:44:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.217 07:44:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:36.217 Cannot find device "nvmf_init_br" 00:17:36.217 07:44:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:36.217 Cannot find device "nvmf_init_br2" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:36.217 Cannot find device "nvmf_tgt_br" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.217 Cannot find device "nvmf_tgt_br2" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:36.217 Cannot find device "nvmf_init_br" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:36.217 Cannot find device "nvmf_init_br2" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:36.217 Cannot find device "nvmf_tgt_br" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:36.217 Cannot find device "nvmf_tgt_br2" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:36.217 Cannot find device "nvmf_br" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:36.217 Cannot find device "nvmf_init_if" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:36.217 Cannot find device "nvmf_init_if2" 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.217 07:44:54 
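Each of the "Cannot find device ..." and "Cannot open network namespace ..." errors above comes from the pre-setup cleanup pass in nvmf_veth_init: the script tears down interfaces and the namespace left by a previous run before recreating them. The trace shows every failing command immediately followed by true at the same script line, which suggests a cmd || true construct so the failures are tolerated. A minimal sketch of that best-effort cleanup pattern, using the device names visible in the trace:

  # best-effort removal of leftovers from a previous run; errors are ignored
  ip link set nvmf_init_br nomaster || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true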
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.217 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:36.476 07:44:54 
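The block above builds the virtual test network: a network namespace (nvmf_tgt_ns_spdk) for the target side, veth pairs whose init ends stay in the root namespace as initiator interfaces (10.0.0.1/24, 10.0.0.2/24) and whose tgt ends are moved into the namespace as target interfaces (10.0.0.3/24, 10.0.0.4/24), plus a bridge nvmf_br that the peer ends are attached to just below. A reduced sketch with a single initiator/target pair instead of the two pairs this run creates:

  ip netns add nvmf_tgt_ns_spdk
  # veth pair: nvmf_init_if keeps the host address, nvmf_init_br goes on the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The pings that follow (10.0.0.3/.4 from the root namespace, 10.0.0.1/.2 from inside the namespace) confirm the bridge forwards in both directions before the test proceeds.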
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:36.476 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:36.476 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:17:36.476 00:17:36.476 --- 10.0.0.3 ping statistics --- 00:17:36.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.476 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:36.476 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:36.476 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:36.476 00:17:36.476 --- 10.0.0.4 ping statistics --- 00:17:36.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.476 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:36.476 00:17:36.476 --- 10.0.0.1 ping statistics --- 00:17:36.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.476 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:36.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:36.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:36.476 00:17:36.476 --- 10.0.0.2 ping statistics --- 00:17:36.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.476 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.476 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.477 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.477 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.477 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.477 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.477 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:36.735 07:44:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:36.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:36.993 Waiting for block devices as requested 00:17:37.251 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:37.251 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:37.510 No valid GPT data, bailing 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:37.510 07:44:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:37.510 No valid GPT data, bailing 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:37.510 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:37.510 No valid GPT data, bailing 00:17:37.769 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:37.770 No valid GPT data, bailing 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf -a 10.0.0.1 -t tcp -s 4420 00:17:37.770 00:17:37.770 Discovery Log Number of Records 2, Generation counter 2 00:17:37.770 =====Discovery Log Entry 0====== 00:17:37.770 trtype: tcp 00:17:37.770 adrfam: ipv4 00:17:37.770 subtype: current discovery subsystem 00:17:37.770 treq: not specified, sq flow control disable supported 00:17:37.770 portid: 1 00:17:37.770 trsvcid: 4420 00:17:37.770 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:37.770 traddr: 10.0.0.1 00:17:37.770 eflags: none 00:17:37.770 sectype: none 00:17:37.770 =====Discovery Log Entry 1====== 00:17:37.770 trtype: tcp 00:17:37.770 adrfam: ipv4 00:17:37.770 subtype: nvme subsystem 00:17:37.770 treq: not 
specified, sq flow control disable supported 00:17:37.770 portid: 1 00:17:37.770 trsvcid: 4420 00:17:37.770 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:37.770 traddr: 10.0.0.1 00:17:37.770 eflags: none 00:17:37.770 sectype: none 00:17:37.770 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:37.770 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:38.029 ===================================================== 00:17:38.029 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:38.029 ===================================================== 00:17:38.029 Controller Capabilities/Features 00:17:38.029 ================================ 00:17:38.029 Vendor ID: 0000 00:17:38.029 Subsystem Vendor ID: 0000 00:17:38.029 Serial Number: 38f8f1142b00cce40300 00:17:38.029 Model Number: Linux 00:17:38.029 Firmware Version: 6.8.9-20 00:17:38.029 Recommended Arb Burst: 0 00:17:38.029 IEEE OUI Identifier: 00 00 00 00:17:38.029 Multi-path I/O 00:17:38.029 May have multiple subsystem ports: No 00:17:38.029 May have multiple controllers: No 00:17:38.029 Associated with SR-IOV VF: No 00:17:38.029 Max Data Transfer Size: Unlimited 00:17:38.029 Max Number of Namespaces: 0 00:17:38.029 Max Number of I/O Queues: 1024 00:17:38.029 NVMe Specification Version (VS): 1.3 00:17:38.029 NVMe Specification Version (Identify): 1.3 00:17:38.029 Maximum Queue Entries: 1024 00:17:38.029 Contiguous Queues Required: No 00:17:38.029 Arbitration Mechanisms Supported 00:17:38.029 Weighted Round Robin: Not Supported 00:17:38.029 Vendor Specific: Not Supported 00:17:38.029 Reset Timeout: 7500 ms 00:17:38.029 Doorbell Stride: 4 bytes 00:17:38.029 NVM Subsystem Reset: Not Supported 00:17:38.029 Command Sets Supported 00:17:38.029 NVM Command Set: Supported 00:17:38.029 Boot Partition: Not Supported 00:17:38.029 Memory Page Size Minimum: 4096 bytes 00:17:38.029 Memory Page Size Maximum: 4096 bytes 00:17:38.029 Persistent Memory Region: Not Supported 00:17:38.029 Optional Asynchronous Events Supported 00:17:38.029 Namespace Attribute Notices: Not Supported 00:17:38.029 Firmware Activation Notices: Not Supported 00:17:38.029 ANA Change Notices: Not Supported 00:17:38.029 PLE Aggregate Log Change Notices: Not Supported 00:17:38.029 LBA Status Info Alert Notices: Not Supported 00:17:38.029 EGE Aggregate Log Change Notices: Not Supported 00:17:38.029 Normal NVM Subsystem Shutdown event: Not Supported 00:17:38.029 Zone Descriptor Change Notices: Not Supported 00:17:38.029 Discovery Log Change Notices: Supported 00:17:38.029 Controller Attributes 00:17:38.029 128-bit Host Identifier: Not Supported 00:17:38.029 Non-Operational Permissive Mode: Not Supported 00:17:38.029 NVM Sets: Not Supported 00:17:38.029 Read Recovery Levels: Not Supported 00:17:38.029 Endurance Groups: Not Supported 00:17:38.029 Predictable Latency Mode: Not Supported 00:17:38.029 Traffic Based Keep ALive: Not Supported 00:17:38.029 Namespace Granularity: Not Supported 00:17:38.029 SQ Associations: Not Supported 00:17:38.029 UUID List: Not Supported 00:17:38.029 Multi-Domain Subsystem: Not Supported 00:17:38.029 Fixed Capacity Management: Not Supported 00:17:38.029 Variable Capacity Management: Not Supported 00:17:38.029 Delete Endurance Group: Not Supported 00:17:38.029 Delete NVM Set: Not Supported 00:17:38.029 Extended LBA Formats Supported: Not Supported 00:17:38.029 Flexible Data 
Placement Supported: Not Supported 00:17:38.029 00:17:38.029 Controller Memory Buffer Support 00:17:38.029 ================================ 00:17:38.029 Supported: No 00:17:38.029 00:17:38.029 Persistent Memory Region Support 00:17:38.029 ================================ 00:17:38.029 Supported: No 00:17:38.029 00:17:38.029 Admin Command Set Attributes 00:17:38.029 ============================ 00:17:38.029 Security Send/Receive: Not Supported 00:17:38.029 Format NVM: Not Supported 00:17:38.029 Firmware Activate/Download: Not Supported 00:17:38.029 Namespace Management: Not Supported 00:17:38.029 Device Self-Test: Not Supported 00:17:38.029 Directives: Not Supported 00:17:38.029 NVMe-MI: Not Supported 00:17:38.029 Virtualization Management: Not Supported 00:17:38.029 Doorbell Buffer Config: Not Supported 00:17:38.029 Get LBA Status Capability: Not Supported 00:17:38.029 Command & Feature Lockdown Capability: Not Supported 00:17:38.029 Abort Command Limit: 1 00:17:38.029 Async Event Request Limit: 1 00:17:38.029 Number of Firmware Slots: N/A 00:17:38.029 Firmware Slot 1 Read-Only: N/A 00:17:38.029 Firmware Activation Without Reset: N/A 00:17:38.029 Multiple Update Detection Support: N/A 00:17:38.029 Firmware Update Granularity: No Information Provided 00:17:38.029 Per-Namespace SMART Log: No 00:17:38.029 Asymmetric Namespace Access Log Page: Not Supported 00:17:38.029 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:38.029 Command Effects Log Page: Not Supported 00:17:38.029 Get Log Page Extended Data: Supported 00:17:38.029 Telemetry Log Pages: Not Supported 00:17:38.029 Persistent Event Log Pages: Not Supported 00:17:38.029 Supported Log Pages Log Page: May Support 00:17:38.029 Commands Supported & Effects Log Page: Not Supported 00:17:38.029 Feature Identifiers & Effects Log Page:May Support 00:17:38.029 NVMe-MI Commands & Effects Log Page: May Support 00:17:38.029 Data Area 4 for Telemetry Log: Not Supported 00:17:38.029 Error Log Page Entries Supported: 1 00:17:38.029 Keep Alive: Not Supported 00:17:38.029 00:17:38.029 NVM Command Set Attributes 00:17:38.029 ========================== 00:17:38.029 Submission Queue Entry Size 00:17:38.029 Max: 1 00:17:38.029 Min: 1 00:17:38.029 Completion Queue Entry Size 00:17:38.029 Max: 1 00:17:38.029 Min: 1 00:17:38.029 Number of Namespaces: 0 00:17:38.029 Compare Command: Not Supported 00:17:38.029 Write Uncorrectable Command: Not Supported 00:17:38.029 Dataset Management Command: Not Supported 00:17:38.029 Write Zeroes Command: Not Supported 00:17:38.029 Set Features Save Field: Not Supported 00:17:38.029 Reservations: Not Supported 00:17:38.029 Timestamp: Not Supported 00:17:38.029 Copy: Not Supported 00:17:38.029 Volatile Write Cache: Not Present 00:17:38.030 Atomic Write Unit (Normal): 1 00:17:38.030 Atomic Write Unit (PFail): 1 00:17:38.030 Atomic Compare & Write Unit: 1 00:17:38.030 Fused Compare & Write: Not Supported 00:17:38.030 Scatter-Gather List 00:17:38.030 SGL Command Set: Supported 00:17:38.030 SGL Keyed: Not Supported 00:17:38.030 SGL Bit Bucket Descriptor: Not Supported 00:17:38.030 SGL Metadata Pointer: Not Supported 00:17:38.030 Oversized SGL: Not Supported 00:17:38.030 SGL Metadata Address: Not Supported 00:17:38.030 SGL Offset: Supported 00:17:38.030 Transport SGL Data Block: Not Supported 00:17:38.030 Replay Protected Memory Block: Not Supported 00:17:38.030 00:17:38.030 Firmware Slot Information 00:17:38.030 ========================= 00:17:38.030 Active slot: 0 00:17:38.030 00:17:38.030 00:17:38.030 Error Log 
00:17:38.030 ========= 00:17:38.030 00:17:38.030 Active Namespaces 00:17:38.030 ================= 00:17:38.030 Discovery Log Page 00:17:38.030 ================== 00:17:38.030 Generation Counter: 2 00:17:38.030 Number of Records: 2 00:17:38.030 Record Format: 0 00:17:38.030 00:17:38.030 Discovery Log Entry 0 00:17:38.030 ---------------------- 00:17:38.030 Transport Type: 3 (TCP) 00:17:38.030 Address Family: 1 (IPv4) 00:17:38.030 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:38.030 Entry Flags: 00:17:38.030 Duplicate Returned Information: 0 00:17:38.030 Explicit Persistent Connection Support for Discovery: 0 00:17:38.030 Transport Requirements: 00:17:38.030 Secure Channel: Not Specified 00:17:38.030 Port ID: 1 (0x0001) 00:17:38.030 Controller ID: 65535 (0xffff) 00:17:38.030 Admin Max SQ Size: 32 00:17:38.030 Transport Service Identifier: 4420 00:17:38.030 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:38.030 Transport Address: 10.0.0.1 00:17:38.030 Discovery Log Entry 1 00:17:38.030 ---------------------- 00:17:38.030 Transport Type: 3 (TCP) 00:17:38.030 Address Family: 1 (IPv4) 00:17:38.030 Subsystem Type: 2 (NVM Subsystem) 00:17:38.030 Entry Flags: 00:17:38.030 Duplicate Returned Information: 0 00:17:38.030 Explicit Persistent Connection Support for Discovery: 0 00:17:38.030 Transport Requirements: 00:17:38.030 Secure Channel: Not Specified 00:17:38.030 Port ID: 1 (0x0001) 00:17:38.030 Controller ID: 65535 (0xffff) 00:17:38.030 Admin Max SQ Size: 32 00:17:38.030 Transport Service Identifier: 4420 00:17:38.030 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:38.030 Transport Address: 10.0.0.1 00:17:38.030 07:44:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:38.289 get_feature(0x01) failed 00:17:38.289 get_feature(0x02) failed 00:17:38.289 get_feature(0x04) failed 00:17:38.289 ===================================================== 00:17:38.289 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:38.289 ===================================================== 00:17:38.289 Controller Capabilities/Features 00:17:38.289 ================================ 00:17:38.289 Vendor ID: 0000 00:17:38.289 Subsystem Vendor ID: 0000 00:17:38.289 Serial Number: a30ee0750cebabfc6fef 00:17:38.289 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:38.289 Firmware Version: 6.8.9-20 00:17:38.289 Recommended Arb Burst: 6 00:17:38.289 IEEE OUI Identifier: 00 00 00 00:17:38.289 Multi-path I/O 00:17:38.289 May have multiple subsystem ports: Yes 00:17:38.289 May have multiple controllers: Yes 00:17:38.289 Associated with SR-IOV VF: No 00:17:38.289 Max Data Transfer Size: Unlimited 00:17:38.289 Max Number of Namespaces: 1024 00:17:38.289 Max Number of I/O Queues: 128 00:17:38.289 NVMe Specification Version (VS): 1.3 00:17:38.289 NVMe Specification Version (Identify): 1.3 00:17:38.289 Maximum Queue Entries: 1024 00:17:38.289 Contiguous Queues Required: No 00:17:38.289 Arbitration Mechanisms Supported 00:17:38.289 Weighted Round Robin: Not Supported 00:17:38.289 Vendor Specific: Not Supported 00:17:38.289 Reset Timeout: 7500 ms 00:17:38.289 Doorbell Stride: 4 bytes 00:17:38.289 NVM Subsystem Reset: Not Supported 00:17:38.289 Command Sets Supported 00:17:38.289 NVM Command Set: Supported 00:17:38.289 Boot Partition: Not Supported 00:17:38.289 Memory 
Page Size Minimum: 4096 bytes 00:17:38.289 Memory Page Size Maximum: 4096 bytes 00:17:38.289 Persistent Memory Region: Not Supported 00:17:38.289 Optional Asynchronous Events Supported 00:17:38.289 Namespace Attribute Notices: Supported 00:17:38.289 Firmware Activation Notices: Not Supported 00:17:38.289 ANA Change Notices: Supported 00:17:38.289 PLE Aggregate Log Change Notices: Not Supported 00:17:38.289 LBA Status Info Alert Notices: Not Supported 00:17:38.289 EGE Aggregate Log Change Notices: Not Supported 00:17:38.289 Normal NVM Subsystem Shutdown event: Not Supported 00:17:38.289 Zone Descriptor Change Notices: Not Supported 00:17:38.289 Discovery Log Change Notices: Not Supported 00:17:38.289 Controller Attributes 00:17:38.289 128-bit Host Identifier: Supported 00:17:38.289 Non-Operational Permissive Mode: Not Supported 00:17:38.289 NVM Sets: Not Supported 00:17:38.289 Read Recovery Levels: Not Supported 00:17:38.289 Endurance Groups: Not Supported 00:17:38.289 Predictable Latency Mode: Not Supported 00:17:38.289 Traffic Based Keep ALive: Supported 00:17:38.289 Namespace Granularity: Not Supported 00:17:38.289 SQ Associations: Not Supported 00:17:38.289 UUID List: Not Supported 00:17:38.289 Multi-Domain Subsystem: Not Supported 00:17:38.289 Fixed Capacity Management: Not Supported 00:17:38.289 Variable Capacity Management: Not Supported 00:17:38.289 Delete Endurance Group: Not Supported 00:17:38.289 Delete NVM Set: Not Supported 00:17:38.289 Extended LBA Formats Supported: Not Supported 00:17:38.289 Flexible Data Placement Supported: Not Supported 00:17:38.289 00:17:38.289 Controller Memory Buffer Support 00:17:38.289 ================================ 00:17:38.289 Supported: No 00:17:38.289 00:17:38.289 Persistent Memory Region Support 00:17:38.289 ================================ 00:17:38.289 Supported: No 00:17:38.289 00:17:38.289 Admin Command Set Attributes 00:17:38.289 ============================ 00:17:38.289 Security Send/Receive: Not Supported 00:17:38.289 Format NVM: Not Supported 00:17:38.289 Firmware Activate/Download: Not Supported 00:17:38.289 Namespace Management: Not Supported 00:17:38.289 Device Self-Test: Not Supported 00:17:38.289 Directives: Not Supported 00:17:38.289 NVMe-MI: Not Supported 00:17:38.289 Virtualization Management: Not Supported 00:17:38.289 Doorbell Buffer Config: Not Supported 00:17:38.289 Get LBA Status Capability: Not Supported 00:17:38.289 Command & Feature Lockdown Capability: Not Supported 00:17:38.289 Abort Command Limit: 4 00:17:38.289 Async Event Request Limit: 4 00:17:38.289 Number of Firmware Slots: N/A 00:17:38.289 Firmware Slot 1 Read-Only: N/A 00:17:38.289 Firmware Activation Without Reset: N/A 00:17:38.289 Multiple Update Detection Support: N/A 00:17:38.289 Firmware Update Granularity: No Information Provided 00:17:38.289 Per-Namespace SMART Log: Yes 00:17:38.289 Asymmetric Namespace Access Log Page: Supported 00:17:38.289 ANA Transition Time : 10 sec 00:17:38.289 00:17:38.289 Asymmetric Namespace Access Capabilities 00:17:38.289 ANA Optimized State : Supported 00:17:38.289 ANA Non-Optimized State : Supported 00:17:38.289 ANA Inaccessible State : Supported 00:17:38.289 ANA Persistent Loss State : Supported 00:17:38.289 ANA Change State : Supported 00:17:38.289 ANAGRPID is not changed : No 00:17:38.289 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:38.289 00:17:38.289 ANA Group Identifier Maximum : 128 00:17:38.289 Number of ANA Group Identifiers : 128 00:17:38.289 Max Number of Allowed Namespaces : 1024 00:17:38.289 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:38.289 Command Effects Log Page: Supported 00:17:38.290 Get Log Page Extended Data: Supported 00:17:38.290 Telemetry Log Pages: Not Supported 00:17:38.290 Persistent Event Log Pages: Not Supported 00:17:38.290 Supported Log Pages Log Page: May Support 00:17:38.290 Commands Supported & Effects Log Page: Not Supported 00:17:38.290 Feature Identifiers & Effects Log Page:May Support 00:17:38.290 NVMe-MI Commands & Effects Log Page: May Support 00:17:38.290 Data Area 4 for Telemetry Log: Not Supported 00:17:38.290 Error Log Page Entries Supported: 128 00:17:38.290 Keep Alive: Supported 00:17:38.290 Keep Alive Granularity: 1000 ms 00:17:38.290 00:17:38.290 NVM Command Set Attributes 00:17:38.290 ========================== 00:17:38.290 Submission Queue Entry Size 00:17:38.290 Max: 64 00:17:38.290 Min: 64 00:17:38.290 Completion Queue Entry Size 00:17:38.290 Max: 16 00:17:38.290 Min: 16 00:17:38.290 Number of Namespaces: 1024 00:17:38.290 Compare Command: Not Supported 00:17:38.290 Write Uncorrectable Command: Not Supported 00:17:38.290 Dataset Management Command: Supported 00:17:38.290 Write Zeroes Command: Supported 00:17:38.290 Set Features Save Field: Not Supported 00:17:38.290 Reservations: Not Supported 00:17:38.290 Timestamp: Not Supported 00:17:38.290 Copy: Not Supported 00:17:38.290 Volatile Write Cache: Present 00:17:38.290 Atomic Write Unit (Normal): 1 00:17:38.290 Atomic Write Unit (PFail): 1 00:17:38.290 Atomic Compare & Write Unit: 1 00:17:38.290 Fused Compare & Write: Not Supported 00:17:38.290 Scatter-Gather List 00:17:38.290 SGL Command Set: Supported 00:17:38.290 SGL Keyed: Not Supported 00:17:38.290 SGL Bit Bucket Descriptor: Not Supported 00:17:38.290 SGL Metadata Pointer: Not Supported 00:17:38.290 Oversized SGL: Not Supported 00:17:38.290 SGL Metadata Address: Not Supported 00:17:38.290 SGL Offset: Supported 00:17:38.290 Transport SGL Data Block: Not Supported 00:17:38.290 Replay Protected Memory Block: Not Supported 00:17:38.290 00:17:38.290 Firmware Slot Information 00:17:38.290 ========================= 00:17:38.290 Active slot: 0 00:17:38.290 00:17:38.290 Asymmetric Namespace Access 00:17:38.290 =========================== 00:17:38.290 Change Count : 0 00:17:38.290 Number of ANA Group Descriptors : 1 00:17:38.290 ANA Group Descriptor : 0 00:17:38.290 ANA Group ID : 1 00:17:38.290 Number of NSID Values : 1 00:17:38.290 Change Count : 0 00:17:38.290 ANA State : 1 00:17:38.290 Namespace Identifier : 1 00:17:38.290 00:17:38.290 Commands Supported and Effects 00:17:38.290 ============================== 00:17:38.290 Admin Commands 00:17:38.290 -------------- 00:17:38.290 Get Log Page (02h): Supported 00:17:38.290 Identify (06h): Supported 00:17:38.290 Abort (08h): Supported 00:17:38.290 Set Features (09h): Supported 00:17:38.290 Get Features (0Ah): Supported 00:17:38.290 Asynchronous Event Request (0Ch): Supported 00:17:38.290 Keep Alive (18h): Supported 00:17:38.290 I/O Commands 00:17:38.290 ------------ 00:17:38.290 Flush (00h): Supported 00:17:38.290 Write (01h): Supported LBA-Change 00:17:38.290 Read (02h): Supported 00:17:38.290 Write Zeroes (08h): Supported LBA-Change 00:17:38.290 Dataset Management (09h): Supported 00:17:38.290 00:17:38.290 Error Log 00:17:38.290 ========= 00:17:38.290 Entry: 0 00:17:38.290 Error Count: 0x3 00:17:38.290 Submission Queue Id: 0x0 00:17:38.290 Command Id: 0x5 00:17:38.290 Phase Bit: 0 00:17:38.290 Status Code: 0x2 00:17:38.290 Status Code Type: 0x0 00:17:38.290 Do Not Retry: 1 00:17:38.290 Error 
Location: 0x28 00:17:38.290 LBA: 0x0 00:17:38.290 Namespace: 0x0 00:17:38.290 Vendor Log Page: 0x0 00:17:38.290 ----------- 00:17:38.290 Entry: 1 00:17:38.290 Error Count: 0x2 00:17:38.290 Submission Queue Id: 0x0 00:17:38.290 Command Id: 0x5 00:17:38.290 Phase Bit: 0 00:17:38.290 Status Code: 0x2 00:17:38.290 Status Code Type: 0x0 00:17:38.290 Do Not Retry: 1 00:17:38.290 Error Location: 0x28 00:17:38.290 LBA: 0x0 00:17:38.290 Namespace: 0x0 00:17:38.290 Vendor Log Page: 0x0 00:17:38.290 ----------- 00:17:38.290 Entry: 2 00:17:38.290 Error Count: 0x1 00:17:38.290 Submission Queue Id: 0x0 00:17:38.290 Command Id: 0x4 00:17:38.290 Phase Bit: 0 00:17:38.290 Status Code: 0x2 00:17:38.290 Status Code Type: 0x0 00:17:38.290 Do Not Retry: 1 00:17:38.290 Error Location: 0x28 00:17:38.290 LBA: 0x0 00:17:38.290 Namespace: 0x0 00:17:38.290 Vendor Log Page: 0x0 00:17:38.290 00:17:38.290 Number of Queues 00:17:38.290 ================ 00:17:38.290 Number of I/O Submission Queues: 128 00:17:38.290 Number of I/O Completion Queues: 128 00:17:38.290 00:17:38.290 ZNS Specific Controller Data 00:17:38.290 ============================ 00:17:38.290 Zone Append Size Limit: 0 00:17:38.290 00:17:38.290 00:17:38.290 Active Namespaces 00:17:38.290 ================= 00:17:38.290 get_feature(0x05) failed 00:17:38.290 Namespace ID:1 00:17:38.290 Command Set Identifier: NVM (00h) 00:17:38.290 Deallocate: Supported 00:17:38.290 Deallocated/Unwritten Error: Not Supported 00:17:38.290 Deallocated Read Value: Unknown 00:17:38.290 Deallocate in Write Zeroes: Not Supported 00:17:38.290 Deallocated Guard Field: 0xFFFF 00:17:38.290 Flush: Supported 00:17:38.290 Reservation: Not Supported 00:17:38.290 Namespace Sharing Capabilities: Multiple Controllers 00:17:38.290 Size (in LBAs): 1310720 (5GiB) 00:17:38.290 Capacity (in LBAs): 1310720 (5GiB) 00:17:38.290 Utilization (in LBAs): 1310720 (5GiB) 00:17:38.290 UUID: 506ed98f-22b1-4f8a-b925-925553ef1f9d 00:17:38.290 Thin Provisioning: Not Supported 00:17:38.290 Per-NS Atomic Units: Yes 00:17:38.290 Atomic Boundary Size (Normal): 0 00:17:38.290 Atomic Boundary Size (PFail): 0 00:17:38.290 Atomic Boundary Offset: 0 00:17:38.290 NGUID/EUI64 Never Reused: No 00:17:38.290 ANA group ID: 1 00:17:38.290 Namespace Write Protected: No 00:17:38.290 Number of LBA Formats: 1 00:17:38.290 Current LBA Format: LBA Format #00 00:17:38.290 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:38.290 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:38.290 rmmod nvme_tcp 00:17:38.290 rmmod nvme_fabrics 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:38.290 07:44:56 
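nvmftestfini begins here: modprobe -v -r unloads nvme-tcp and nvme-fabrics (the rmmod lines above), and the trace that follows removes the firewall rules and the virtual network. The iptables rules were inserted earlier with -m comment --comment 'SPDK_NVMF:...', so the cleanup can drop exactly those rules by filtering a full ruleset dump; the iptr helper traced below invokes iptables-save, grep -v SPDK_NVMF and iptables-restore, which compose naturally as the following sketch (the exact plumbing inside iptr is not shown by the trace):

  # remove every rule that was tagged with the SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore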
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:38.290 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.549 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:38.549 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:38.549 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:38.549 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:38.549 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:38.550 07:44:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:39.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.485 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:39.485 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:39.744 ************************************ 00:17:39.744 END TEST nvmf_identify_kernel_target 00:17:39.744 ************************************ 00:17:39.744 00:17:39.744 real 0m3.759s 00:17:39.744 user 0m1.236s 00:17:39.744 sys 0m1.911s 00:17:39.744 07:44:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:39.744 07:44:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.745 07:44:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:39.745 07:44:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:39.745 07:44:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:39.745 07:44:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.745 ************************************ 00:17:39.745 START TEST nvmf_auth_host 00:17:39.745 ************************************ 00:17:39.745 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:39.745 * Looking for test storage... 
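The clean_kernel_target trace above (echo 0, rm -f of the port-to-subsystem symlink, the three rmdir calls, modprobe -r nvmet_tcp nvmet) is the mirror image of the configure_kernel_target sequence earlier in the test, which built a kernel NVMe-oF target through configfs: subsystem nqn.2016-06.io.spdk:testnqn with namespace 1 backed by /dev/nvme1n1, exported on TCP port 4420 at 10.0.0.1, which nvme discover and the two spdk_nvme_identify runs then talked to. A condensed sketch of both halves; the attribute file names below are the standard nvmet configfs ones and are an assumption here, since xtrace prints the echo commands but not their redirection targets:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet          # nvmet_tcp ends up loaded too; teardown removes both
  # --- setup (configure_kernel_target) ---
  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir "$port"
  # the run also echoes SPDK-$nqn into a subsystem attribute (likely the model string)
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"
  # --- teardown (clean_kernel_target) ---
  echo 0 > "$sub/namespaces/1/enable"
  rm -f "$port/subsystems/$nqn"
  rmdir "$sub/namespaces/1"
  rmdir "$port"
  rmdir "$sub"
  modprobe -r nvmet_tcp nvmet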
00:17:39.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:39.745 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:39.745 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:17:39.745 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:40.004 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:40.004 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.005 --rc genhtml_branch_coverage=1 00:17:40.005 --rc genhtml_function_coverage=1 00:17:40.005 --rc genhtml_legend=1 00:17:40.005 --rc geninfo_all_blocks=1 00:17:40.005 --rc geninfo_unexecuted_blocks=1 00:17:40.005 00:17:40.005 ' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.005 --rc genhtml_branch_coverage=1 00:17:40.005 --rc genhtml_function_coverage=1 00:17:40.005 --rc genhtml_legend=1 00:17:40.005 --rc geninfo_all_blocks=1 00:17:40.005 --rc geninfo_unexecuted_blocks=1 00:17:40.005 00:17:40.005 ' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.005 --rc genhtml_branch_coverage=1 00:17:40.005 --rc genhtml_function_coverage=1 00:17:40.005 --rc genhtml_legend=1 00:17:40.005 --rc geninfo_all_blocks=1 00:17:40.005 --rc geninfo_unexecuted_blocks=1 00:17:40.005 00:17:40.005 ' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:40.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.005 --rc genhtml_branch_coverage=1 00:17:40.005 --rc genhtml_function_coverage=1 00:17:40.005 --rc genhtml_legend=1 00:17:40.005 --rc geninfo_all_blocks=1 00:17:40.005 --rc geninfo_unexecuted_blocks=1 00:17:40.005 00:17:40.005 ' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
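The trace above is scripts/common.sh deciding whether the installed lcov is older than 2.x: lt 1.15 2 splits both version strings into arrays (ver1, ver2) and walks the components left to right; 1 is less than 2 at the first component, so 1.15 compares as older. A reduced re-implementation of that comparison, not the exact cmp_versions code (the real helper also handles '>', '=' and splits on '.-:'):

  # return 0 (true) when version $1 is older than version $2
  lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
      if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1
  }
  lt 1.15 2 && echo "lcov is older than 2.x"

The result selects the LCOV_OPTS/LCOV values exported just above before auth.sh sources the nvmf common helpers.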
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:40.005 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:40.005 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:40.006 Cannot find device "nvmf_init_br" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:40.006 Cannot find device "nvmf_init_br2" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:40.006 Cannot find device "nvmf_tgt_br" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.006 Cannot find device "nvmf_tgt_br2" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:40.006 Cannot find device "nvmf_init_br" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:40.006 Cannot find device "nvmf_init_br2" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:40.006 Cannot find device "nvmf_tgt_br" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:40.006 Cannot find device "nvmf_tgt_br2" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:40.006 Cannot find device "nvmf_br" 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:40.006 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:40.265 Cannot find device "nvmf_init_if" 00:17:40.265 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:40.265 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:40.265 Cannot find device "nvmf_init_if2" 00:17:40.265 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:40.265 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.265 07:44:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:40.265 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.265 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:40.265 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.265 07:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
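Editor's note on the nvmf_veth_init trace above: the end state is two veth pairs per side joined by the nvmf_br bridge, with the initiator ends (10.0.0.1, 10.0.0.2) left in the default namespace and the target ends (10.0.0.3, 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch of the first pair, using only commands that appear in the trace (illustrative only, not a substitute for nvmf/common.sh):

    # one veth pair for the initiator side, one for the target side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the *_br peer ends so initiator and target namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The second pair (nvmf_init_if2 / nvmf_tgt_if2) is built the same way, which is why the subsequent pings to 10.0.0.3 and 10.0.0.4 from the default namespace, and to 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk, all succeed.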
00:17:40.265 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:40.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:40.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:40.524 00:17:40.524 --- 10.0.0.3 ping statistics --- 00:17:40.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.524 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:40.524 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:40.524 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:40.524 00:17:40.524 --- 10.0.0.4 ping statistics --- 00:17:40.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.524 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:40.524 00:17:40.524 --- 10.0.0.1 ping statistics --- 00:17:40.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.524 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:40.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:40.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:17:40.524 00:17:40.524 --- 10.0.0.2 ping statistics --- 00:17:40.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.524 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78037 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78037 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78037 ']' 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
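Editor's note on the nvmfappstart call above: it launches the SPDK target inside the test namespace with nvme_auth debug logging and then waits for the RPC socket. A rough shell equivalent of what the trace shows, with the wait loop written as a simplified stand-in for waitforlisten (the real helper in autotest_common.sh adds retries, timeouts, and pid checks):

    # start nvmf_tgt inside the target namespace, as in the trace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    # simplified stand-in: poll until the RPC socket answers a trivial request
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done

Once the socket is up, the test registers the generated DH-HMAC-CHAP key files over that same RPC channel, which is what the keyring_file_add_key calls later in this trace are doing.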
00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:40.524 07:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.462 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:41.462 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:17:41.462 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.462 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:41.462 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fc4c03425cbb6ad02b2a1a3bbafbfa74 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.TFW 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fc4c03425cbb6ad02b2a1a3bbafbfa74 0 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fc4c03425cbb6ad02b2a1a3bbafbfa74 0 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fc4c03425cbb6ad02b2a1a3bbafbfa74 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.TFW 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.TFW 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.TFW 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:41.721 07:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:41.721 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ad74339621614a7800e86fa391a5f740a85890568b68a3ec2a8f2f63b47d1529 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1n2 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ad74339621614a7800e86fa391a5f740a85890568b68a3ec2a8f2f63b47d1529 3 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ad74339621614a7800e86fa391a5f740a85890568b68a3ec2a8f2f63b47d1529 3 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ad74339621614a7800e86fa391a5f740a85890568b68a3ec2a8f2f63b47d1529 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1n2 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1n2 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1n2 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4ea0b033f70ee3e01b72b090b20978ed3dba1e5581b05d7f 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iHa 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4ea0b033f70ee3e01b72b090b20978ed3dba1e5581b05d7f 0 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4ea0b033f70ee3e01b72b090b20978ed3dba1e5581b05d7f 0 
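Editor's note on the gen_dhchap_key calls in this stretch of the trace: each one draws random hex characters from /dev/urandom via xxd and hands them to format_key, which wraps them into an NVMe DH-HMAC-CHAP secret. The sketch below is a reconstruction of what the inline "python -" step appears to do, assuming the standard DHHC-1 layout (base64 of the key text followed by a little-endian CRC32, with the digest id rendered as two digits); it is not the literal body of format_key:

    format_dhchap_key_sketch() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'PYEOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                   # the hex text itself is the secret
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended to the key
    print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
    PYEOF
    }
    # e.g. this should reproduce the "DHHC-1:00:NGVh...:" secret that shows up
    # later in this trace for the 48-character key drawn above:
    format_dhchap_key_sketch 4ea0b033f70ee3e01b72b090b20978ed3dba1e5581b05d7f 0

The digest argument (0 = null, 1 = sha256, 2 = sha384, 3 = sha512) only tags the secret; the keys[] entries are stored in /tmp/spdk.key-* files and their matching ckeys[] controller keys are registered alongside them further down.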
00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4ea0b033f70ee3e01b72b090b20978ed3dba1e5581b05d7f 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iHa 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iHa 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.iHa 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cbd94aeb45f6fbffd8cc62138c26005e1b1abeea034201e9 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0xl 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cbd94aeb45f6fbffd8cc62138c26005e1b1abeea034201e9 2 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cbd94aeb45f6fbffd8cc62138c26005e1b1abeea034201e9 2 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cbd94aeb45f6fbffd8cc62138c26005e1b1abeea034201e9 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:41.722 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0xl 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0xl 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0xl 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.980 07:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3426404dccb5d8a1026b3c012b0714f5 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7aD 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3426404dccb5d8a1026b3c012b0714f5 1 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3426404dccb5d8a1026b3c012b0714f5 1 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3426404dccb5d8a1026b3c012b0714f5 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:41.980 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7aD 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7aD 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7aD 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d753fa3c6c29568c1672c87f5748030a 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.an8 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d753fa3c6c29568c1672c87f5748030a 1 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d753fa3c6c29568c1672c87f5748030a 1 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=d753fa3c6c29568c1672c87f5748030a 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.an8 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.an8 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.an8 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f133b24c524831e6cb02d8735c850cecea69e3f13d772b9a 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Fgv 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f133b24c524831e6cb02d8735c850cecea69e3f13d772b9a 2 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f133b24c524831e6cb02d8735c850cecea69e3f13d772b9a 2 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f133b24c524831e6cb02d8735c850cecea69e3f13d772b9a 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Fgv 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Fgv 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Fgv 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:41.981 07:44:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b085045ecd0101c2e6660939a79ba5f2 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ipg 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b085045ecd0101c2e6660939a79ba5f2 0 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b085045ecd0101c2e6660939a79ba5f2 0 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b085045ecd0101c2e6660939a79ba5f2 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:41.981 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ipg 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ipg 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ipg 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:42.240 07:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a9e221e42b8dc0ef295f0423637e2e66869891757d75ae8aa7354381aa27698d 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VJn 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a9e221e42b8dc0ef295f0423637e2e66869891757d75ae8aa7354381aa27698d 3 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a9e221e42b8dc0ef295f0423637e2e66869891757d75ae8aa7354381aa27698d 3 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a9e221e42b8dc0ef295f0423637e2e66869891757d75ae8aa7354381aa27698d 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VJn 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VJn 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VJn 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78037 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78037 ']' 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:42.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:42.240 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TFW 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1n2 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1n2 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.iHa 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0xl ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.0xl 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7aD 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.an8 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.an8 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Fgv 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ipg ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ipg 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VJn 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:42.499 07:45:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:42.499 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:42.758 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:42.758 07:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:43.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:43.017 Waiting for block devices as requested 00:17:43.275 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:43.275 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:43.843 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:44.102 No valid GPT data, bailing 00:17:44.102 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:44.102 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:44.102 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:44.103 No valid GPT data, bailing 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:44.103 No valid GPT data, bailing 00:17:44.103 07:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:44.103 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:44.362 No valid GPT data, bailing 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf -a 10.0.0.1 -t tcp -s 4420 00:17:44.362 00:17:44.362 Discovery Log Number of Records 2, Generation counter 2 00:17:44.362 =====Discovery Log Entry 0====== 00:17:44.362 trtype: tcp 00:17:44.362 adrfam: ipv4 00:17:44.362 subtype: current discovery subsystem 00:17:44.362 treq: not specified, sq flow control disable supported 00:17:44.362 portid: 1 00:17:44.362 trsvcid: 4420 00:17:44.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:44.362 traddr: 10.0.0.1 00:17:44.362 eflags: none 00:17:44.362 sectype: none 00:17:44.362 =====Discovery Log Entry 1====== 00:17:44.362 trtype: tcp 00:17:44.362 adrfam: ipv4 00:17:44.362 subtype: nvme subsystem 00:17:44.362 treq: not specified, sq flow control disable supported 00:17:44.362 portid: 1 00:17:44.362 trsvcid: 4420 00:17:44.362 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:44.362 traddr: 10.0.0.1 00:17:44.362 eflags: none 00:17:44.362 sectype: none 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:44.362 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.363 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.622 nvme0n1 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.622 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.882 nvme0n1 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.882 
07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:44.882 07:45:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.882 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.882 nvme0n1 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:44.883 07:45:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.883 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.142 nvme0n1 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.142 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:45.143 07:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.143 07:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.143 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.402 nvme0n1 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:45.402 
07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
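[Note] The sha256/ffdhe2048 pass above exercises one (digest, dhgroup, keyid) combination at a time: the per-keyid secrets are pushed to the kernel nvmet host entry, the initiator's DHCHAP policy is narrowed with bdev_nvme_set_options, and a controller is attached and then detached over 10.0.0.1:4420 before the next keyid. A minimal sketch of one such iteration follows. It assumes the test harness's rpc_cmd wrapper and the pre-registered key names key0/ckey0 (set up earlier in host/auth.sh, outside this excerpt); the configfs attribute paths dhchap_key/dhchap_ctrl_key are an assumption inferred from the echoed key values, since xtrace does not print redirection targets.

  # Sketch of one loop iteration (keyid=0, sha256, ffdhe2048); NQNs, address and
  # key strings are taken verbatim from the log above.
  HOSTNQN=nqn.2024-02.io.spdk:host0
  SUBNQN=nqn.2024-02.io.spdk:cnode0

  # Target side: give the kernel nvmet host entry the host and controller secrets
  # (attribute names below are assumed, not visible in the xtrace output).
  echo "DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA:" \
      > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_key
  echo "DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=:" \
      > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_ctrl_key

  # Initiator side: restrict negotiation to this digest/dhgroup, then attach with key 0.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Confirm the authenticated controller came up, then tear it down for the next keyid.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  rpc_cmd bdev_nvme_detach_controller nvme0
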
00:17:45.402 nvme0n1 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:45.402 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:45.971 07:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.971 nvme0n1 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:45.971 07:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.971 07:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.971 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.232 nvme0n1 00:17:46.232 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.232 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.232 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.232 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.232 07:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:46.232 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.233 nvme0n1 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.233 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.492 nvme0n1 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.492 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.493 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.752 nvme0n1 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
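[editor's note] The trace above and below is one pass after another of the nested loop that drives this whole section (the host/auth.sh@101-104 lines): for every DH group the script re-provisions each of the five secrets on the kernel target and then re-authenticates the SPDK host with that secret. A minimal sketch of that skeleton, reconstructed from the traced line numbers; the array contents are assumptions (only sha256 and the ffdhe3072..ffdhe8192 groups are actually visible in this part of the log), and keys/ckeys are presumed to have been filled in earlier in auth.sh:

    # reconstructed loop skeleton, not the verbatim script
    digest=sha256                                                  # only sha256 appears in this stretch of the trace
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed full list; 3072..8192 are visible above/below
    # keys=(...) and ckeys=(...) are assumed to already hold the five DHHC-1 secrets set up earlier
    for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
        for keyid in "${!keys[@]}"; do             # host/auth.sh@102, keyids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: push key/ckey to the target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach on the host side
        done
    done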
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:46.752 07:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.320 07:45:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.320 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.579 nvme0n1 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.579 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.580 07:45:05 
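[editor's note] Each connect_authenticate call (the host/auth.sh@55-65 lines in the trace) follows the same JSON-RPC sequence against the SPDK bdev/nvme layer. A condensed paraphrase of the traced commands, using the rpc_cmd wrapper exactly as it appears in the log (rpc_cmd is the test framework's wrapper around the SPDK RPC client); the function body below is a sketch, not the verbatim source:

    connect_authenticate() {    # paraphrased from host/auth.sh@55-65 as traced above
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58: bidirectional auth only when a ctrlr key exists

        # @60: restrict the initiator to the digest/dhgroup under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # @61: attach over TCP to the kernel target at 10.0.0.1:4420 using key<keyid>
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # @64-65: authentication succeeded iff the controller shows up, then tear it down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }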
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.580 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.839 nvme0n1 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.839 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.109 nvme0n1 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.109 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.110 07:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.397 nvme0n1 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.397 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:48.398 07:45:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.398 nvme0n1 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.398 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
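[editor's note] The target-side half, nvmet_auth_set_key (host/auth.sh@42-51), only shows up in the trace as a series of echo commands because xtrace prints the left-hand side of each redirection. A hedged sketch of what those echoes presumably feed; the configfs paths and attribute names below are assumptions based on the standard kernel nvmet host attributes, not something this log asserts:

    nvmet_auth_set_key() {   # hedged reconstruction of the traced @42-51 lines
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path

        echo "hmac(${digest})" > "${host}/dhchap_hash"       # @48: 'hmac(sha256)' in the trace
        echo "$dhgroup"        > "${host}/dhchap_dhgroup"    # @49
        echo "$key"            > "${host}/dhchap_key"        # @50
        [[ -z $ckey ]] || echo "$ckey" > "${host}/dhchap_ctrl_key"   # @51: controller key is optional (empty for keyid 4)
    }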
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:48.671 07:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.055 07:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.314 nvme0n1 00:17:50.314 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.314 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.314 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.314 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.314 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.314 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.314 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.314 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.315 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.574 nvme0n1 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.574 07:45:08 
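[editor's note] The secrets being exchanged are standard NVMe DH-HMAC-CHAP shared secrets. If I read the format correctly, each one is DHHC-1:<hh>:<base64>: where <hh> indicates the HMAC tied to the secret length (00 = used as-is, 01 = SHA-256/32 bytes, 02 = SHA-384/48 bytes, 03 = SHA-512/64 bytes) and the base64 payload carries the secret plus a trailing CRC-32. A quick way to sanity-check that against one of the keyid=2 secrets from this trace; the byte accounting is my reading of the format, not something stated in the log:

    # decode the third colon-separated field and count its bytes
    key='DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN:'
    cut -d: -f3 <<< "$key" | base64 -d | wc -c    # prints 36 = 32-byte secret + 4-byte CRC-32, matching the "01" tag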
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.574 07:45:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:50.574 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.833 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 nvme0n1 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:51.093 07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.093 
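[editor's note] The get_main_ns_ip helper that keeps echoing 10.0.0.1 in the trace (the nvmf/common.sh@769-783 lines) just maps the transport under test to the name of an environment variable and then dereferences it. A condensed paraphrase of those traced lines; the variable names come from the trace, while TEST_TRANSPORT and the exact control flow are approximations:

    get_main_ns_ip() {   # paraphrase of the traced nvmf/common.sh@769-783
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @773

        # @775-776: pick the variable *name* for the active transport (tcp in this run)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # @778-783: dereference it; for this run that resolves to 10.0.0.1
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }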
07:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.352 nvme0n1 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:51.352 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.353 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.612 nvme0n1 00:17:51.612 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.870 07:45:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.870 07:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.438 nvme0n1 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.438 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.439 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 nvme0n1 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.007 
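After each authenticated connect the script checks that a controller actually came up and then tears it down so the next combination starts clean; that is the bdev_nvme_get_controllers / jq / bdev_nvme_detach_controller sequence at host/auth.sh@64-65 that precedes every nvme0n1 marker in this log. A short sketch of that verify-and-detach step (the commands are taken from the trace; the function name is made up for illustration):

# Sketch of the verification/teardown traced at host/auth.sh@64-65.
verify_and_detach() {
    local name
    # A successful DH-HMAC-CHAP handshake leaves one controller named nvme0 behind.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # same check as the escaped [[ nvme0 == \n\v\m\e\0 ]] above
    # Detach so the following digest/dhgroup/key iteration starts from scratch.
    rpc_cmd bdev_nvme_detach_controller nvme0
}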
07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.007 07:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.576 nvme0n1 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.576 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.153 nvme0n1 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.153 07:45:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.153 07:45:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.153 07:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 nvme0n1 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:54.731 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.732 nvme0n1 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.732 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.991 nvme0n1 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:54.991 
07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:54.991 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.992 nvme0n1 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.992 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.251 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.251 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.251 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.251 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.251 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.251 07:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.251 
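Before each connect, the matching credentials are programmed on the target side via nvmet_auth_set_key (host/auth.sh@42-51 above): it selects the HMAC name, the DH group and the DHHC-1 secret for the given key id and echoes them out. The trace shows the echoed values but not where they are written, so the write helper and attribute labels in the sketch below are placeholders, not the real script:

# Sketch of the target-side key setup traced at host/auth.sh@42-51.
# keys[]/ckeys[] hold the DHHC-1:xx:<base64>: secrets shown inline in the log.
# write_target_attr and the attribute labels are hypothetical; the destination of
# the echoed values (e.g. kernel nvmet configuration) is not visible in this excerpt.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}

    write_target_attr hash     "hmac($digest)"   # 'hmac(sha256)' / 'hmac(sha384)' in this log
    write_target_attr dhgroup  "$dhgroup"        # ffdhe2048 ... ffdhe8192 in this log
    write_target_attr key      "$key"
    [[ -z $ckey ]] || write_target_attr ctrl_key "$ckey"   # keyid 4 has no controller key above
}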
07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.251 nvme0n1 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:55.251 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.252 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.511 nvme0n1 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.511 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.512 nvme0n1 00:17:55.512 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.771 
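The recurring line numbers host/auth.sh@100-104 give away the driver structure: three nested loops over digests, DH groups and key ids, each iteration doing the target-side key setup followed by the authenticated connect and the verify/detach shown earlier. A sketch of that outer loop; the array contents listed are only the values that actually appear in this excerpt, and the full lists are defined earlier in auth.sh:

# Outer loop implied by the host/auth.sh@100-104 trace lines (sketch, not verbatim).
# keys[]/ckeys[] are the DHHC-1 secrets defined earlier in auth.sh.
digests=(sha256 sha384)                              # the two digests seen in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)   # groups seen in this excerpt
for digest in "${digests[@]}"; do                    # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do              # host/auth.sh@101
        for keyid in "${!keys[@]}"; do               # host/auth.sh@102 -- key ids 0..4 above
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104
        done
    done
done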
07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.771 07:45:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.771 nvme0n1 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.771 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:55.772 07:45:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.772 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.030 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.031 nvme0n1 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.031 07:45:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.031 07:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.290 nvme0n1 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:56.290 
07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
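[editor's note] The trace above repeats one verification cycle per (digest, dhgroup, keyid) combination: install the key on the nvmet target side (nvmet_auth_set_key), restrict the SPDK initiator to the matching digest and DH group, attach with the corresponding DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is exercised), confirm the controller came up, then detach it again. A minimal sketch of the same cycle driven by hand through scripts/rpc.py (rpc_cmd in this trace is the autotest wrapper around it), assuming a running SPDK application and that the named keys key0/ckey0 were already registered with SPDK's keyring (e.g. via keyring_file_add_key, which the full auth.sh handles outside this excerpt):

    # allow only the digest/dhgroup pair under test (here sha384 + ffdhe3072)
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # attach to the target at 10.0.0.1:4420 using host key key0 and controller key ckey0;
    # the attach only succeeds if DH-HMAC-CHAP authentication passed
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the controller exists, then tear it down before the next combination
    scripts/rpc.py bdev_nvme_get_controllers
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The trace then repeats this sequence for ffdhe4096 and ffdhe6144 and for each of the five configured keys; key 4 has no controller key, so that pass authenticates the host direction only. The paths and the keyring step above are assumptions for illustration, not part of the logged output.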
00:17:56.290 nvme0n1 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.290 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:56.550 07:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.550 nvme0n1 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.550 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.812 07:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.812 07:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.812 nvme0n1 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.812 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.071 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.072 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.072 nvme0n1 00:17:57.072 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.072 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.072 07:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.072 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.072 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.072 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.331 nvme0n1 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.331 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.591 nvme0n1 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.591 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.851 07:45:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.851 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.110 nvme0n1 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.110 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.111 07:45:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.111 07:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.370 nvme0n1 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:17:58.370 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.371 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.939 nvme0n1 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.939 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.199 nvme0n1 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.199 07:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:59.199 07:45:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.199 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.459 nvme0n1 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.459 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 nvme0n1 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:00.028 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.288 07:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.549 nvme0n1 00:18:00.549 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.549 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.549 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.549 07:45:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.549 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.549 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.808 07:45:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.808 07:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.375 nvme0n1 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.375 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.376 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:01.376 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.376 
07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.634 nvme0n1 00:18:01.634 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.900 07:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.468 nvme0n1 00:18:02.468 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.468 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.468 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.468 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.468 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:02.469 07:45:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.469 07:45:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 nvme0n1 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:02.469 07:45:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.469 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.729 nvme0n1 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.729 nvme0n1 00:18:02.729 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.989 nvme0n1 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.989 07:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.248 nvme0n1 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.249 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.508 nvme0n1 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.508 nvme0n1 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.508 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:03.768 
07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.768 nvme0n1 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:18:03.768 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.769 
07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.769 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.028 nvme0n1 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.028 07:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.287 nvme0n1 00:18:04.287 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.287 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.287 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.287 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.287 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.287 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.287 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.288 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.547 nvme0n1 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.547 
07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.547 07:45:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.547 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.548 nvme0n1 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.548 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:04.806 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:04.807 07:45:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.807 nvme0n1 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.807 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.066 07:45:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.066 nvme0n1 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.066 07:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.066 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.324 
07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
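Stripped of the xtrace noise, each connect_authenticate pass above is four host-side RPCs. rpc_cmd is the suite's thin wrapper around scripts/rpc.py, so one iteration (here sha512/ffdhe4096 with key 3) looks roughly like the sketch below; it mirrors the commands visible in the trace rather than quoting the script verbatim:

    # Limit the host to the digest/DH group under test.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Attach to the target, offering the host key and the bidirectional
    # controller key (both are keyring entry names, not raw secrets).
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # The controller only exists if DH-HMAC-CHAP succeeded; verify, then tear down.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0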
00:18:05.324 nvme0n1 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.324 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:05.584 07:45:23 
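On the target side, the nvmet_auth_set_key fragments above (the echoes of 'hmac(sha512)', the DH group and the two DHHC-1 strings) configure what the kernel nvmet target will demand from this host. A hedged reconstruction for keyid 0, assuming the standard CONFIG_NVME_TARGET_AUTH configfs attributes and a host directory named after the initiator NQN; the paths and attribute names are assumptions, not read from this log:

    # Presumed effect of nvmet_auth_set_key sha512 ffdhe6144 0 on the kernel target.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"      # required digest
    echo 'ffdhe6144'    > "$host/dhchap_dhgroup"   # required DH group
    echo 'DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA:' > "$host/dhchap_key"
    echo 'DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=:' > "$host/dhchap_ctrl_key"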
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.584 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.844 nvme0n1 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.844 07:45:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.844 07:45:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.844 07:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.103 nvme0n1 00:18:06.103 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.103 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.103 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.103 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.103 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.104 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.104 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.104 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.104 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.104 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:06.362 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.363 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.622 nvme0n1 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.622 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.882 nvme0n1 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.882 07:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 nvme0n1 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA: 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: ]] 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWQ3NDMzOTYyMTYxNGE3ODAwZTg2ZmEzOTFhNWY3NDBhODU4OTA1NjhiNjhhM2VjMmE4ZjJmNjNiNDdkMTUyOVNQFMY=: 00:18:07.452 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.453 07:45:25 
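All of the secrets in this trace use the NVMe-oF configured-secret representation DHHC-1:<t>:<base64>:, where the middle field records how the secret was transformed before storage (00 meaning it is kept in the clear) and the base64 blob is, by convention, the raw secret followed by a 4-byte CRC-32. Treat that layout as an assumption here; a quick bash check against one of the keys above:

    # Inspect a DHHC-1 secret from the trace (layout assumed: base64(secret || CRC32)).
    key='DHHC-1:00:ZmM0YzAzNDI1Y2JiNmFkMDJiMmExYTNiYmFmYmZhNzRsZloA:'
    transform=$(cut -d: -f2 <<<"$key")        # 00 -> secret stored unhashed
    blob=${key#DHHC-1:*:}; blob=${blob%:}     # strip the wrapper and trailing ':'
    echo "transform=$transform"
    echo -n "$blob" | base64 -d | wc -c       # 36 bytes here: 32-byte secret + 4-byte CRC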
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.453 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.060 nvme0n1 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.060 07:45:25 
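The get_main_ns_ip fragments that precede every attach above just resolve which address the host dials: an associative array maps the transport to the name of the environment variable carrying the address, and indirect expansion reads it, yielding 10.0.0.1 in this run. A condensed sketch, with the values copied from the trace and the transport variable name standing in for whatever the suite actually exports:

    # Condensed form of the address selection seen in the trace.
    NVMF_INITIATOR_IP=10.0.0.1              # value echoed in the log
    transport=tcp
    declare -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$transport]}         # -> NVMF_INITIATOR_IP
    echo "${!ip}"                           # indirect expansion -> 10.0.0.1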
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.060 07:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.319 nvme0n1 00:18:08.319 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.319 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.319 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.319 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.319 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.578 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.578 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.578 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.579 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.147 nvme0n1 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjEzM2IyNGM1MjQ4MzFlNmNiMDJkODczNWM4NTBjZWNlYTY5ZTNmMTNkNzcyYjlhAIINMw==: 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: ]] 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjA4NTA0NWVjZDAxMDFjMmU2NjYwOTM5YTc5YmE1ZjIX9Vew: 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.147 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.148 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.148 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.148 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.148 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.148 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:09.148 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.148 07:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.716 nvme0n1 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTllMjIxZTQyYjhkYzBlZjI5NWYwNDIzNjM3ZTJlNjY4Njk4OTE3NTdkNzVhZThhYTczNTQzODFhYTI3Njk4ZCFvAi8=: 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:09.716 07:45:27 
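Key id 4 has no companion controller key, which is why the trace shows an empty ckey= and the [[ -z '' ]] checks above. The script copes with that through bash's ${var:+word} expansion: the --dhchap-ctrlr-key pair is materialized into an array only when a controller key exists, so it simply disappears from the attach RPC otherwise. A stripped-down illustration; the function and argument names are illustrative, not lifted from the script:

    # How the optional --dhchap-ctrlr-key argument pair is assembled.
    build_dhchap_args() {
        local keyid=$1; shift
        local ckeys=("$@")
        # Expands to two words when ckeys[keyid] is non-empty, to nothing otherwise.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "--dhchap-key key${keyid} ${ckey[*]}"
    }
    build_dhchap_args 0 secret0 secret1   # -> --dhchap-key key0 --dhchap-ctrlr-key ckey0
    build_dhchap_args 1 secret0 ''        # -> --dhchap-key key1   (no controller key)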
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.716 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.285 nvme0n1 00:18:10.285 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.285 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.285 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.285 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.285 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.285 07:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:10.285 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.286 request: 00:18:10.286 { 00:18:10.286 "name": "nvme0", 00:18:10.286 "trtype": "tcp", 00:18:10.286 "traddr": "10.0.0.1", 00:18:10.286 "adrfam": "ipv4", 00:18:10.286 "trsvcid": "4420", 00:18:10.286 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:10.286 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:10.286 "prchk_reftag": false, 00:18:10.286 "prchk_guard": false, 00:18:10.286 "hdgst": false, 00:18:10.286 "ddgst": false, 00:18:10.286 "allow_unrecognized_csi": false, 00:18:10.286 "method": "bdev_nvme_attach_controller", 00:18:10.286 "req_id": 1 00:18:10.286 } 00:18:10.286 Got JSON-RPC error response 00:18:10.286 response: 00:18:10.286 { 00:18:10.286 "code": -5, 00:18:10.286 "message": "Input/output error" 00:18:10.286 } 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.286 request: 00:18:10.286 { 00:18:10.286 "name": "nvme0", 00:18:10.286 "trtype": "tcp", 00:18:10.286 "traddr": "10.0.0.1", 00:18:10.286 "adrfam": "ipv4", 00:18:10.286 "trsvcid": "4420", 00:18:10.286 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:10.286 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:10.286 "prchk_reftag": false, 00:18:10.286 "prchk_guard": false, 00:18:10.286 "hdgst": false, 00:18:10.286 "ddgst": false, 00:18:10.286 "dhchap_key": "key2", 00:18:10.286 "allow_unrecognized_csi": false, 00:18:10.286 "method": "bdev_nvme_attach_controller", 00:18:10.286 "req_id": 1 00:18:10.286 } 00:18:10.286 Got JSON-RPC error response 00:18:10.286 response: 00:18:10.286 { 00:18:10.286 "code": -5, 00:18:10.286 "message": "Input/output error" 00:18:10.286 } 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:10.286 07:45:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.286 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.546 request: 00:18:10.546 { 00:18:10.546 "name": "nvme0", 00:18:10.546 "trtype": "tcp", 00:18:10.546 "traddr": "10.0.0.1", 00:18:10.546 "adrfam": "ipv4", 00:18:10.546 "trsvcid": "4420", 
00:18:10.546 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:10.546 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:10.546 "prchk_reftag": false, 00:18:10.546 "prchk_guard": false, 00:18:10.546 "hdgst": false, 00:18:10.546 "ddgst": false, 00:18:10.546 "dhchap_key": "key1", 00:18:10.546 "dhchap_ctrlr_key": "ckey2", 00:18:10.546 "allow_unrecognized_csi": false, 00:18:10.546 "method": "bdev_nvme_attach_controller", 00:18:10.546 "req_id": 1 00:18:10.546 } 00:18:10.546 Got JSON-RPC error response 00:18:10.546 response: 00:18:10.546 { 00:18:10.546 "code": -5, 00:18:10.546 "message": "Input/output error" 00:18:10.546 } 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.546 nvme0n1 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.546 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.546 request: 00:18:10.546 { 00:18:10.546 "name": "nvme0", 00:18:10.547 "dhchap_key": "key1", 00:18:10.547 "dhchap_ctrlr_key": "ckey2", 00:18:10.547 "method": "bdev_nvme_set_keys", 00:18:10.547 "req_id": 1 00:18:10.547 } 00:18:10.547 Got JSON-RPC error response 00:18:10.547 response: 00:18:10.547 
{ 00:18:10.547 "code": -13, 00:18:10.547 "message": "Permission denied" 00:18:10.547 } 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.547 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.806 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:10.806 07:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGVhMGIwMzNmNzBlZTNlMDFiNzJiMDkwYjIwOTc4ZWQzZGJhMWU1NTgxYjA1ZDdmQN829A==: 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: ]] 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2JkOTRhZWI0NWY2ZmJmZmQ4Y2M2MjEzOGMyNjAwNWUxYjFhYmVlYTAzNDIwMWU5VIt/ZQ==: 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.744 nvme0n1 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQyNjQwNGRjY2I1ZDhhMTAyNmIzYzAxMmIwNzE0ZjUPXdqN: 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: ]] 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDc1M2ZhM2M2YzI5NTY4YzE2NzJjODdmNTc0ODAzMGGBNPBx: 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.744 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.745 request: 00:18:11.745 { 00:18:11.745 "name": "nvme0", 00:18:11.745 "dhchap_key": "key2", 00:18:11.745 "dhchap_ctrlr_key": "ckey1", 00:18:11.745 "method": "bdev_nvme_set_keys", 00:18:11.745 "req_id": 1 00:18:11.745 } 00:18:11.745 Got JSON-RPC error response 00:18:11.745 response: 00:18:11.745 { 00:18:11.745 "code": -13, 00:18:11.745 "message": "Permission denied" 00:18:11.745 } 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.745 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.003 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.003 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:12.003 07:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:12.938 rmmod nvme_tcp 00:18:12.938 rmmod nvme_fabrics 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78037 ']' 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78037 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 78037 ']' 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 78037 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.938 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78037 00:18:13.197 killing process with pid 78037 00:18:13.197 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:13.197 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:13.197 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78037' 00:18:13.197 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 78037 00:18:13.197 07:45:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 78037 00:18:13.197 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:13.197 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:13.197 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:13.197 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:13.197 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:13.197 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:13.197 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:13.456 07:45:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.456 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.457 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:13.716 07:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:14.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:14.654 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:18:14.654 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:14.654 07:45:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TFW /tmp/spdk.key-null.iHa /tmp/spdk.key-sha256.7aD /tmp/spdk.key-sha384.Fgv /tmp/spdk.key-sha512.VJn /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:14.654 07:45:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:15.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:15.222 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:15.222 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:15.222 00:18:15.222 real 0m35.549s 00:18:15.222 user 0m32.769s 00:18:15.222 sys 0m4.880s 00:18:15.222 ************************************ 00:18:15.222 END TEST nvmf_auth_host 00:18:15.222 ************************************ 00:18:15.222 07:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:15.222 07:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.222 07:45:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:15.222 07:45:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:15.222 07:45:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:15.222 07:45:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:15.222 07:45:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.222 ************************************ 00:18:15.222 START TEST nvmf_digest 00:18:15.222 ************************************ 00:18:15.222 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:15.482 * Looking for test storage... 
00:18:15.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:15.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.482 --rc genhtml_branch_coverage=1 00:18:15.482 --rc genhtml_function_coverage=1 00:18:15.482 --rc genhtml_legend=1 00:18:15.482 --rc geninfo_all_blocks=1 00:18:15.482 --rc geninfo_unexecuted_blocks=1 00:18:15.482 00:18:15.482 ' 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:15.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.482 --rc genhtml_branch_coverage=1 00:18:15.482 --rc genhtml_function_coverage=1 00:18:15.482 --rc genhtml_legend=1 00:18:15.482 --rc geninfo_all_blocks=1 00:18:15.482 --rc geninfo_unexecuted_blocks=1 00:18:15.482 00:18:15.482 ' 00:18:15.482 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:15.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.482 --rc genhtml_branch_coverage=1 00:18:15.483 --rc genhtml_function_coverage=1 00:18:15.483 --rc genhtml_legend=1 00:18:15.483 --rc geninfo_all_blocks=1 00:18:15.483 --rc geninfo_unexecuted_blocks=1 00:18:15.483 00:18:15.483 ' 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:15.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.483 --rc genhtml_branch_coverage=1 00:18:15.483 --rc genhtml_function_coverage=1 00:18:15.483 --rc genhtml_legend=1 00:18:15.483 --rc geninfo_all_blocks=1 00:18:15.483 --rc geninfo_unexecuted_blocks=1 00:18:15.483 00:18:15.483 ' 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.483 07:45:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:15.483 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.483 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:15.743 Cannot find device "nvmf_init_br" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:15.743 Cannot find device "nvmf_init_br2" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:15.743 Cannot find device "nvmf_tgt_br" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:15.743 Cannot find device "nvmf_tgt_br2" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:15.743 Cannot find device "nvmf_init_br" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:15.743 Cannot find device "nvmf_init_br2" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:15.743 Cannot find device "nvmf_tgt_br" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:15.743 Cannot find device "nvmf_tgt_br2" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:15.743 Cannot find device "nvmf_br" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:15.743 Cannot find device "nvmf_init_if" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:15.743 Cannot find device "nvmf_init_if2" 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.743 07:45:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:15.743 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:16.002 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:16.002 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:16.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:16.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:16.003 00:18:16.003 --- 10.0.0.3 ping statistics --- 00:18:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.003 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:16.003 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:16.003 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:18:16.003 00:18:16.003 --- 10.0.0.4 ping statistics --- 00:18:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.003 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:16.003 00:18:16.003 --- 10.0.0.1 ping statistics --- 00:18:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.003 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:16.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:18:16.003 00:18:16.003 --- 10.0.0.2 ping statistics --- 00:18:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.003 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:16.003 ************************************ 00:18:16.003 START TEST nvmf_digest_clean 00:18:16.003 ************************************ 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
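The interface and namespace names above (nvmf_init_if*, nvmf_tgt_if*, nvmf_br, nvmf_tgt_ns_spdk) all come from nvmf_veth_init in test/nvmf/common.sh, which builds a private NVMe/TCP test network out of veth pairs, a bridge and a network namespace. A minimal sketch of the same wiring, trimmed to a single initiator/target pair, is shown below; it assumes root plus iproute2/iptables, the names and addresses are the ones from this log, and the reduced topology and comments are illustrative rather than a copy of the harness code.

  # namespace that will hold the target end of the link
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: one for the initiator side, one for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  # move the target interface into the namespace and address both ends
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bring the links up and bridge the two root-namespace ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port and let traffic hairpin through the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # same reachability check the harness performs before starting the target
  ping -c 1 10.0.0.3

The ipts wrapper seen in the log issues the same iptables commands but tags every rule with an 'SPDK_NVMF:' comment so that cleanup can later remove exactly the rules it added.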
00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79670 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79670 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79670 ']' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:16.003 07:45:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:16.262 [2024-11-08 07:45:33.963823] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:18:16.262 [2024-11-08 07:45:33.963913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.262 [2024-11-08 07:45:34.114681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.262 [2024-11-08 07:45:34.189942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.262 [2024-11-08 07:45:34.190001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.262 [2024-11-08 07:45:34.190011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.262 [2024-11-08 07:45:34.190020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.262 [2024-11-08 07:45:34.190027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.262 [2024-11-08 07:45:34.190431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.200 07:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.200 07:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:18:17.200 07:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.200 07:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.200 07:45:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:17.200 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.200 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:17.200 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:17.200 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:17.200 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.200 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:17.200 [2024-11-08 07:45:35.109356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.459 null0 00:18:17.459 [2024-11-08 07:45:35.176468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.459 [2024-11-08 07:45:35.200606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:17.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
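The target itself was started a few lines up as 'ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc' (pid 79670), and common_target_config then configures it over /var/tmp/spdk.sock. The RPC batch is not echoed in this log, only its effects are (the null0 bdev, the TCP transport init, the listener on 10.0.0.3:4420), so the scripts/rpc.py sequence below is a plausible reconstruction rather than the literal payload; the null bdev size and block size in particular are illustrative guesses.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # the target was launched with --wait-for-rpc, so finish subsystem init first
  $rpc framework_start_init
  # transport options mirror NVMF_TRANSPORT_OPTS='-t tcp -o' set earlier in common.sh
  $rpc nvmf_create_transport -t tcp -o
  # backing namespace: a null bdev (sizes illustrative, not taken from this run)
  $rpc bdev_null_create null0 100 4096
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Because the target process lives inside nvmf_tgt_ns_spdk, the 10.0.0.3 listener is only reachable through the veth/bridge network built above, while the RPC socket remains an ordinary filesystem path visible from the root namespace.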
00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79702 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79702 /var/tmp/bperf.sock 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79702 ']' 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:17.459 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:17.459 [2024-11-08 07:45:35.246179] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:18:17.459 [2024-11-08 07:45:35.246590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79702 ] 00:18:17.459 [2024-11-08 07:45:35.396617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.718 [2024-11-08 07:45:35.454061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.718 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.718 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:18:17.718 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:17.718 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:17.718 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:17.977 [2024-11-08 07:45:35.739683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.977 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:17.977 07:45:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:18.236 nvme0n1 00:18:18.236 07:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:18.236 07:45:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:18.236 Running I/O for 2 seconds... 
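Condensed from the commands visible above, each measurement pass drives a second SPDK application, bdevperf, as the NVMe/TCP initiator: it is started idle on its own RPC socket, the controller is attached with the TCP data digest enabled, and the run is triggered through the bperf_py helper. Paths, addresses and flags below are the ones from this run; only the backgrounding and the comments are added for readability.

  spdk=/home/vagrant/spdk_repo/spdk
  # idle bdevperf on a private RPC socket; -w/-o/-q set workload, IO size and queue depth,
  # -t 2 is the 2-second runtime, -z plus --wait-for-rpc keep it waiting for configuration
  $spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish init, then attach the target with the NVMe/TCP data digest (--ddgst, CRC32C) enabled
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the timed run against the nvme0n1 bdev that the attach created
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests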
00:18:20.181 19812.00 IOPS, 77.39 MiB/s [2024-11-08T07:45:38.142Z] 19939.00 IOPS, 77.89 MiB/s 00:18:20.181 Latency(us) 00:18:20.181 [2024-11-08T07:45:38.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.181 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:20.181 nvme0n1 : 2.00 19965.01 77.99 0.00 0.00 6407.47 6179.11 18974.23 00:18:20.181 [2024-11-08T07:45:38.142Z] =================================================================================================================== 00:18:20.181 [2024-11-08T07:45:38.142Z] Total : 19965.01 77.99 0.00 0.00 6407.47 6179.11 18974.23 00:18:20.181 { 00:18:20.181 "results": [ 00:18:20.181 { 00:18:20.181 "job": "nvme0n1", 00:18:20.181 "core_mask": "0x2", 00:18:20.181 "workload": "randread", 00:18:20.181 "status": "finished", 00:18:20.181 "queue_depth": 128, 00:18:20.181 "io_size": 4096, 00:18:20.181 "runtime": 2.003806, 00:18:20.181 "iops": 19965.00659245456, 00:18:20.181 "mibps": 77.98830700177562, 00:18:20.181 "io_failed": 0, 00:18:20.181 "io_timeout": 0, 00:18:20.181 "avg_latency_us": 6407.465327438978, 00:18:20.181 "min_latency_us": 6179.108571428572, 00:18:20.181 "max_latency_us": 18974.23238095238 00:18:20.181 } 00:18:20.181 ], 00:18:20.181 "core_count": 1 00:18:20.181 } 00:18:20.440 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:20.440 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:20.440 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:20.440 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:20.440 | select(.opcode=="crc32c") 00:18:20.440 | "\(.module_name) \(.executed)"' 00:18:20.440 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79702 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79702 ']' 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79702 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79702 00:18:20.699 killing process with pid 79702 00:18:20.699 Received shutdown signal, test time was about 2.000000 seconds 00:18:20.699 00:18:20.699 Latency(us) 00:18:20.699 [2024-11-08T07:45:38.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:20.699 [2024-11-08T07:45:38.660Z] =================================================================================================================== 00:18:20.699 [2024-11-08T07:45:38.660Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.699 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79702' 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79702 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79702 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79755 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79755 /var/tmp/bperf.sock 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79755 ']' 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:20.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:20.700 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:20.700 [2024-11-08 07:45:38.651322] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
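What actually decides pass/fail for each of these runs is not the IOPS figure but the accel statistics read back from the bdevperf process: get_accel_stats asks how many crc32c operations were executed and by which module, and with scan_dsa=false the expected module is the software one. The query is the same one shown in the log, isolated here for clarity; the socket path and jq filter are copied from the run.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # prints "<module> <count>" for the crc32c opcode, e.g. "software 123456"
  $rpc -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

With --ddgst set on the TCP connection every data PDU carries a CRC32C digest, so a run that moved data but reports zero executed crc32c operations would mean the digest path was never exercised.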
00:18:20.700 [2024-11-08 07:45:38.652134] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79755 ] 00:18:20.700 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:20.700 Zero copy mechanism will not be used. 00:18:20.959 [2024-11-08 07:45:38.793176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.959 [2024-11-08 07:45:38.836547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.959 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:20.959 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:18:20.959 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:20.959 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:20.959 07:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:21.526 [2024-11-08 07:45:39.193575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.526 07:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:21.526 07:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:21.797 nvme0n1 00:18:21.797 07:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:21.797 07:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:21.797 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:21.797 Zero copy mechanism will not be used. 00:18:21.797 Running I/O for 2 seconds... 
00:18:24.114 7440.00 IOPS, 930.00 MiB/s [2024-11-08T07:45:42.075Z] 7480.00 IOPS, 935.00 MiB/s 00:18:24.114 Latency(us) 00:18:24.114 [2024-11-08T07:45:42.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.114 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:24.114 nvme0n1 : 2.00 7477.74 934.72 0.00 0.00 2137.10 2028.50 4213.03 00:18:24.114 [2024-11-08T07:45:42.075Z] =================================================================================================================== 00:18:24.114 [2024-11-08T07:45:42.075Z] Total : 7477.74 934.72 0.00 0.00 2137.10 2028.50 4213.03 00:18:24.114 { 00:18:24.114 "results": [ 00:18:24.114 { 00:18:24.114 "job": "nvme0n1", 00:18:24.114 "core_mask": "0x2", 00:18:24.114 "workload": "randread", 00:18:24.114 "status": "finished", 00:18:24.114 "queue_depth": 16, 00:18:24.114 "io_size": 131072, 00:18:24.114 "runtime": 2.002745, 00:18:24.114 "iops": 7477.736806233444, 00:18:24.114 "mibps": 934.7171007791806, 00:18:24.114 "io_failed": 0, 00:18:24.114 "io_timeout": 0, 00:18:24.114 "avg_latency_us": 2137.1040455840457, 00:18:24.114 "min_latency_us": 2028.4952380952382, 00:18:24.114 "max_latency_us": 4213.028571428571 00:18:24.114 } 00:18:24.114 ], 00:18:24.114 "core_count": 1 00:18:24.114 } 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:24.114 | select(.opcode=="crc32c") 00:18:24.114 | "\(.module_name) \(.executed)"' 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79755 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79755 ']' 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79755 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79755 00:18:24.114 killing process with pid 79755 00:18:24.114 Received shutdown signal, test time was about 2.000000 seconds 00:18:24.114 00:18:24.114 Latency(us) 00:18:24.114 [2024-11-08T07:45:42.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:24.114 [2024-11-08T07:45:42.075Z] =================================================================================================================== 00:18:24.114 [2024-11-08T07:45:42.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79755' 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79755 00:18:24.114 07:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79755 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79802 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79802 /var/tmp/bperf.sock 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79802 ']' 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:24.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:24.373 07:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:24.373 [2024-11-08 07:45:42.205625] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
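The digest_clean passes differ only in the workload tuple handed to run_bperf: random reads then random writes, each at 4 KiB with queue depth 128 and at 128 KiB with queue depth 16, always with DSA offload disabled (the trailing false). digest.sh spells the four calls out one per line; the loop below is only a compact way to show the pattern, not code from the harness, and the remaining write/large-block combination follows further down in this log.

  # same four (workload, IO size, queue depth) combinations as the host/digest.sh runs
  for args in "randread 4096 128" "randread 131072 16" \
              "randwrite 4096 128" "randwrite 131072 16"; do
      run_bperf $args false   # false = no DSA, crc32c stays in the software module
  done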
00:18:24.373 [2024-11-08 07:45:42.205895] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79802 ] 00:18:24.631 [2024-11-08 07:45:42.350870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.631 [2024-11-08 07:45:42.392606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.198 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:25.198 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:18:25.198 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:25.198 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:25.198 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:25.457 [2024-11-08 07:45:43.332891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.457 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:25.457 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.024 nvme0n1 00:18:26.024 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:26.024 07:45:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:26.024 Running I/O for 2 seconds... 
00:18:27.897 21337.00 IOPS, 83.35 MiB/s [2024-11-08T07:45:45.858Z] 21336.50 IOPS, 83.35 MiB/s 00:18:27.897 Latency(us) 00:18:27.897 [2024-11-08T07:45:45.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.897 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.897 nvme0n1 : 2.00 21365.98 83.46 0.00 0.00 5986.30 1825.65 12483.05 00:18:27.897 [2024-11-08T07:45:45.858Z] =================================================================================================================== 00:18:27.897 [2024-11-08T07:45:45.858Z] Total : 21365.98 83.46 0.00 0.00 5986.30 1825.65 12483.05 00:18:27.897 { 00:18:27.897 "results": [ 00:18:27.897 { 00:18:27.897 "job": "nvme0n1", 00:18:27.897 "core_mask": "0x2", 00:18:27.897 "workload": "randwrite", 00:18:27.897 "status": "finished", 00:18:27.897 "queue_depth": 128, 00:18:27.897 "io_size": 4096, 00:18:27.897 "runtime": 2.003231, 00:18:27.897 "iops": 21365.983254053077, 00:18:27.897 "mibps": 83.46087208614483, 00:18:27.897 "io_failed": 0, 00:18:27.897 "io_timeout": 0, 00:18:27.897 "avg_latency_us": 5986.29711197224, 00:18:27.897 "min_latency_us": 1825.6457142857143, 00:18:27.897 "max_latency_us": 12483.047619047618 00:18:27.897 } 00:18:27.897 ], 00:18:27.897 "core_count": 1 00:18:27.897 } 00:18:27.897 07:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:27.897 07:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:27.897 07:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:27.897 07:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:27.897 07:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:27.897 | select(.opcode=="crc32c") 00:18:27.897 | "\(.module_name) \(.executed)"' 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79802 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79802 ']' 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79802 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.156 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79802 00:18:28.415 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:28.415 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
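The mibps field in the JSON block above is directly derivable from the other fields: IOPS times the 4096-byte IO size, expressed in MiB. A quick consistency check with the numbers from this randwrite run (bc is only used here as a calculator, it is not part of the harness):

  # 21365.98 IOPS * 4096 bytes per IO / 1048576 bytes per MiB  ->  ~83.46 MiB/s
  echo 'scale=2; 21365.98 * 4096 / 1048576' | bc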
00:18:28.415 killing process with pid 79802 00:18:28.415 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79802' 00:18:28.415 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79802 00:18:28.415 Received shutdown signal, test time was about 2.000000 seconds 00:18:28.415 00:18:28.415 Latency(us) 00:18:28.415 [2024-11-08T07:45:46.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.415 [2024-11-08T07:45:46.376Z] =================================================================================================================== 00:18:28.415 [2024-11-08T07:45:46.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.415 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79802 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79859 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79859 /var/tmp/bperf.sock 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79859 ']' 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:28.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:28.674 07:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:28.674 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:28.674 Zero copy mechanism will not be used. 00:18:28.674 [2024-11-08 07:45:46.424805] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:18:28.674 [2024-11-08 07:45:46.424894] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79859 ] 00:18:28.674 [2024-11-08 07:45:46.565970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.933 [2024-11-08 07:45:46.637531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.499 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:29.499 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:18:29.499 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:29.499 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:29.499 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:29.758 [2024-11-08 07:45:47.613928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.758 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:29.758 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:30.325 nvme0n1 00:18:30.326 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:30.326 07:45:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:30.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:30.326 Zero copy mechanism will not be used. 00:18:30.326 Running I/O for 2 seconds... 
00:18:32.198 7202.00 IOPS, 900.25 MiB/s [2024-11-08T07:45:50.159Z] 7230.00 IOPS, 903.75 MiB/s 00:18:32.198 Latency(us) 00:18:32.198 [2024-11-08T07:45:50.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.198 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:32.198 nvme0n1 : 2.00 7227.23 903.40 0.00 0.00 2210.25 1693.01 7957.94 00:18:32.198 [2024-11-08T07:45:50.159Z] =================================================================================================================== 00:18:32.198 [2024-11-08T07:45:50.159Z] Total : 7227.23 903.40 0.00 0.00 2210.25 1693.01 7957.94 00:18:32.198 { 00:18:32.198 "results": [ 00:18:32.198 { 00:18:32.198 "job": "nvme0n1", 00:18:32.198 "core_mask": "0x2", 00:18:32.198 "workload": "randwrite", 00:18:32.198 "status": "finished", 00:18:32.198 "queue_depth": 16, 00:18:32.198 "io_size": 131072, 00:18:32.198 "runtime": 2.003672, 00:18:32.198 "iops": 7227.230804243409, 00:18:32.198 "mibps": 903.4038505304261, 00:18:32.198 "io_failed": 0, 00:18:32.198 "io_timeout": 0, 00:18:32.198 "avg_latency_us": 2210.2509026935127, 00:18:32.198 "min_latency_us": 1693.0133333333333, 00:18:32.198 "max_latency_us": 7957.942857142857 00:18:32.198 } 00:18:32.198 ], 00:18:32.198 "core_count": 1 00:18:32.198 } 00:18:32.198 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:32.198 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:32.198 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:32.198 | select(.opcode=="crc32c") 00:18:32.198 | "\(.module_name) \(.executed)"' 00:18:32.198 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:32.198 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79859 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79859 ']' 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79859 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79859 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
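The latency column in the last result also lines up with the configured queue depth: for a closed-loop load generator the average number of I/Os in flight is roughly IOPS times average latency (Little's law), which should land near the depth of 16 used for this 128 KiB randwrite pass. Checking with the figures above (again, bc is just a calculator, not part of the test):

  # 7227.23 IOPS * 2210.25 us average latency / 1e6  ->  ~15.97 in-flight IOs, i.e. close to qd=16
  echo 'scale=2; 7227.23 * 2210.25 / 1000000' | bc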
00:18:32.457 killing process with pid 79859 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79859' 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79859 00:18:32.457 Received shutdown signal, test time was about 2.000000 seconds 00:18:32.457 00:18:32.457 Latency(us) 00:18:32.457 [2024-11-08T07:45:50.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.457 [2024-11-08T07:45:50.418Z] =================================================================================================================== 00:18:32.457 [2024-11-08T07:45:50.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.457 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79859 00:18:32.716 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79670 00:18:32.717 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79670 ']' 00:18:32.717 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79670 00:18:32.717 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:18:32.717 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79670 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:32.976 killing process with pid 79670 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79670' 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79670 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79670 00:18:32.976 00:18:32.976 real 0m16.970s 00:18:32.976 user 0m30.281s 00:18:32.976 sys 0m6.507s 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:32.976 ************************************ 00:18:32.976 END TEST nvmf_digest_clean 00:18:32.976 ************************************ 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:32.976 ************************************ 00:18:32.976 START TEST nvmf_digest_error 00:18:32.976 ************************************ 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:18:32.976 07:45:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.976 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79947 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79947 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79947 ']' 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:33.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:33.235 07:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:33.235 [2024-11-08 07:45:50.996177] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:18:33.235 [2024-11-08 07:45:50.996277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.235 [2024-11-08 07:45:51.144266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.235 [2024-11-08 07:45:51.190881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.235 [2024-11-08 07:45:51.190927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.235 [2024-11-08 07:45:51.190937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.235 [2024-11-08 07:45:51.190945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.235 [2024-11-08 07:45:51.190968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
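
Here the nvmf_digest_error test has just started a fresh nvmf_tgt with --wait-for-rpc, which pauses the application before subsystem initialization so the crc32c opcode can be rerouted to the error-injection accel module before any transport exists. A rough, hand-written approximation of the RPC sequence the trace goes on to show (accel_assign_opc, the uring socket override, TCP transport creation and the 10.0.0.3:4420 listener) is sketched below; the bdev/namespace plumbing done by common_target_config is omitted and the exact arguments used by digest.sh may differ.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used throughout this log

    # The target was started with --wait-for-rpc, so accel and socket settings
    # can still be changed before the framework finishes initializing.
    "$RPC" accel_assign_opc -o crc32c -m error         # route digest CRC32C to the error module
    "$RPC" sock_set_default_impl -i uring              # matches the "override: uring" notice below
    "$RPC" framework_start_init                        # resume initialization

    # Bring up the NVMe-oF/TCP side that bdevperf later attaches to
    # (nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420, per the attach_controller call further down).
    "$RPC" nvmf_create_transport -t tcp
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
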
00:18:33.235 [2024-11-08 07:45:51.191243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.173 [2024-11-08 07:45:51.939697] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.173 07:45:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.173 [2024-11-08 07:45:51.988957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:34.173 null0 00:18:34.173 [2024-11-08 07:45:52.032800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.173 [2024-11-08 07:45:52.056911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79979 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79979 /var/tmp/bperf.sock 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:34.173 07:45:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79979 ']' 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:34.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:34.173 07:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:34.173 [2024-11-08 07:45:52.120685] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:18:34.173 [2024-11-08 07:45:52.120780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79979 ] 00:18:34.432 [2024-11-08 07:45:52.268401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.432 [2024-11-08 07:45:52.331373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.691 [2024-11-08 07:45:52.411640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.259 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:35.259 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:35.259 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:35.259 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:35.518 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:35.518 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.518 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:35.518 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.518 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:35.518 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:35.777 nvme0n1 00:18:35.777 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:35.777 07:45:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.777 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:35.777 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.777 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:35.777 07:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:35.777 Running I/O for 2 seconds... 00:18:35.777 [2024-11-08 07:45:53.692989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:35.777 [2024-11-08 07:45:53.693039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-11-08 07:45:53.693054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.777 [2024-11-08 07:45:53.706228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:35.777 [2024-11-08 07:45:53.706261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-11-08 07:45:53.706272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.777 [2024-11-08 07:45:53.719183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:35.777 [2024-11-08 07:45:53.719215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-11-08 07:45:53.719226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.777 [2024-11-08 07:45:53.732191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:35.777 [2024-11-08 07:45:53.732222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-11-08 07:45:53.732233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.745649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.745680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.745690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.758543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.758575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18338 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.758586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.771558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.771589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.771601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.784552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.784581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.784592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.797505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.797545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.797556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.810313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.810344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.810355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.823237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.823268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.823278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.836063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.836092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.836103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.848835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.848866] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.848876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.861634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.861665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.861675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.874449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.874480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.874490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.887377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.887407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.887418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.900267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.900295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.900306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.913032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.913076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.913087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.925959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.925997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.926008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.938779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.938812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.938822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.951602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.951632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.037 [2024-11-08 07:45:53.951643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.037 [2024-11-08 07:45:53.964675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.037 [2024-11-08 07:45:53.964707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.038 [2024-11-08 07:45:53.964719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.038 [2024-11-08 07:45:53.977907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.038 [2024-11-08 07:45:53.977940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.038 [2024-11-08 07:45:53.977952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.038 [2024-11-08 07:45:53.991214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.038 [2024-11-08 07:45:53.991246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.038 [2024-11-08 07:45:53.991258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.004767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.004796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.004807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.017631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.017661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.017671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.030491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.030520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.030530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.043346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.043376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.043387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.056173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.056200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.056211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.068991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.069020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.069030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.081845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.081875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.081885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.094722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.094753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.094763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.107623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.107653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.107664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.120505] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.120534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.120545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.133371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.133401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.133411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.146188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.146215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.146226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.159006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.159035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.159045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.171899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.297 [2024-11-08 07:45:54.171931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.297 [2024-11-08 07:45:54.171942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.297 [2024-11-08 07:45:54.184873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.298 [2024-11-08 07:45:54.184903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.298 [2024-11-08 07:45:54.184914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.298 [2024-11-08 07:45:54.197973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.298 [2024-11-08 07:45:54.198024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.298 [2024-11-08 07:45:54.198034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:36.298 [2024-11-08 07:45:54.211027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.298 [2024-11-08 07:45:54.211056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.298 [2024-11-08 07:45:54.211066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.298 [2024-11-08 07:45:54.224272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.298 [2024-11-08 07:45:54.224301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.298 [2024-11-08 07:45:54.224311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.298 [2024-11-08 07:45:54.237473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.298 [2024-11-08 07:45:54.237503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.298 [2024-11-08 07:45:54.237513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.298 [2024-11-08 07:45:54.250311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.298 [2024-11-08 07:45:54.250339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.298 [2024-11-08 07:45:54.250349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.263864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.263893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.263904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.276814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.276843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.276853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.290008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.290036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.290047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.303051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.303081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.303092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.316146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.316175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.316185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.329527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.329556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.329567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.342910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.342941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.342952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.355889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.355919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.355929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.368805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.368833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.368844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.381716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.381745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.381755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.394658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.394691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.394702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.407571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.407600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.407611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.420513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.557 [2024-11-08 07:45:54.420542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.557 [2024-11-08 07:45:54.420552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.557 [2024-11-08 07:45:54.433298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.558 [2024-11-08 07:45:54.433326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.558 [2024-11-08 07:45:54.433336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.558 [2024-11-08 07:45:54.446105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.558 [2024-11-08 07:45:54.446133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.558 [2024-11-08 07:45:54.446144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.558 [2024-11-08 07:45:54.458918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.558 [2024-11-08 07:45:54.458951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.558 [2024-11-08 07:45:54.458962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.558 [2024-11-08 07:45:54.471817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.558 [2024-11-08 07:45:54.471858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:36.558 [2024-11-08 07:45:54.471868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.558 [2024-11-08 07:45:54.484708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.558 [2024-11-08 07:45:54.484737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.558 [2024-11-08 07:45:54.484747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.558 [2024-11-08 07:45:54.497591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.558 [2024-11-08 07:45:54.497622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.558 [2024-11-08 07:45:54.497633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.817 [2024-11-08 07:45:54.516438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.817 [2024-11-08 07:45:54.516472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.817 [2024-11-08 07:45:54.516483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.817 [2024-11-08 07:45:54.529972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.817 [2024-11-08 07:45:54.530013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.817 [2024-11-08 07:45:54.530024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.817 [2024-11-08 07:45:54.543051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.817 [2024-11-08 07:45:54.543082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.817 [2024-11-08 07:45:54.543092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.555868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.555899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.555909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.568824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.568855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:118 nsid:1 lba:20409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.568865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.581776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.581809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.581820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.594600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.594655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.594666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.607526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.607557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.607568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.620354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.620383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.620394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.633128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.633156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.633166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.645911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.645944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.645955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.658734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.658766] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.658777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.671885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.671920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.671934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 19482.00 IOPS, 76.10 MiB/s [2024-11-08T07:45:54.779Z] [2024-11-08 07:45:54.686643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.686674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.686685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.699587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.699619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.699630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.712401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.712430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.712440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.725288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.725316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.725326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.738159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.738187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.738198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.750956] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.750996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.751007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.818 [2024-11-08 07:45:54.763883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:36.818 [2024-11-08 07:45:54.763913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.818 [2024-11-08 07:45:54.763924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.777205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.777237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.777249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.790445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.790476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.790486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.803415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.803445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.803456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.816225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.816253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.816263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.829028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.829056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.829067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.841804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.841834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.841844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.854579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.854616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.854627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.867512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.867542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.867553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.880449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.880478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.880489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.893263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.893293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.893303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.906148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.906178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.906189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.919062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.919091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.919103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.931955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.931995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.932006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.944761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.944791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.944801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.957538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.957567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.957577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.970437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.970467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.970477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.983433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.983463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.983473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:54.996669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:54.996702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:54.996713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:55.009944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:55.009991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:55.010003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.078 [2024-11-08 07:45:55.023230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.078 [2024-11-08 07:45:55.023262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.078 [2024-11-08 07:45:55.023273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.036662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.036693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.036704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.049907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.049937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.049948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.062925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.062955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.062966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.075825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.075866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.075876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.088697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.088727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.088737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.101575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.101606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:37.355 [2024-11-08 07:45:55.101616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.114451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.114482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.114492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.127425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.127455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.127466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.140226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.140255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.140265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.152992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.153021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.153031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.165951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.165991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.166002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.179308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.355 [2024-11-08 07:45:55.179338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.355 [2024-11-08 07:45:55.179349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.355 [2024-11-08 07:45:55.192265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.192293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:6370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.192304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.356 [2024-11-08 07:45:55.205058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.205086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.205096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.356 [2024-11-08 07:45:55.218282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.218313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.218324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.356 [2024-11-08 07:45:55.231497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.231527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.231537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.356 [2024-11-08 07:45:55.244528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.244557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.244567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.356 [2024-11-08 07:45:55.257305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.257336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.257347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.356 [2024-11-08 07:45:55.270265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.270294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.270304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.356 [2024-11-08 07:45:55.283181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.283211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.283222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.356 [2024-11-08 07:45:55.296736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.356 [2024-11-08 07:45:55.296769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.356 [2024-11-08 07:45:55.296781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.310372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.310404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.310416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.323965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.324004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.324015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.337348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.337378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.337389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.355977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.356015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.356026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.369062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.369093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.369104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.382021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 
07:45:55.382049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.382060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.395135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.395167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.395178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.407951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.407988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.407999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.420840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.420886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.420897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.433772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.433803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.661 [2024-11-08 07:45:55.433813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.661 [2024-11-08 07:45:55.446562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.661 [2024-11-08 07:45:55.446590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.446600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.459399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.459429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.459439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.472404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.472433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.472443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.485209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.485239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.485249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.498090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.498118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.498128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.510921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.510952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.510963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.523807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.523847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.523857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.536664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.536693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.536704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.549470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.549499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.549509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.562294] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.562324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.562335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.575321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.575351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.575362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.588214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.588244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.588255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.601019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.601055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.601066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.662 [2024-11-08 07:45:55.614069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.662 [2024-11-08 07:45:55.614100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.662 [2024-11-08 07:45:55.614111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.936 [2024-11-08 07:45:55.627694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.936 [2024-11-08 07:45:55.627725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.936 [2024-11-08 07:45:55.627737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.936 [2024-11-08 07:45:55.641100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.936 [2024-11-08 07:45:55.641129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.936 [2024-11-08 07:45:55.641141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:37.936 [2024-11-08 07:45:55.654514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.936 [2024-11-08 07:45:55.654544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.936 [2024-11-08 07:45:55.654556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.936 [2024-11-08 07:45:55.667831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x154b370) 00:18:37.937 [2024-11-08 07:45:55.667872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.937 [2024-11-08 07:45:55.667883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.937 19418.50 IOPS, 75.85 MiB/s 00:18:37.937 Latency(us) 00:18:37.937 [2024-11-08T07:45:55.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.937 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:37.937 nvme0n1 : 2.00 19445.19 75.96 0.00 0.00 6577.89 6272.73 25964.74 00:18:37.937 [2024-11-08T07:45:55.898Z] =================================================================================================================== 00:18:37.937 [2024-11-08T07:45:55.898Z] Total : 19445.19 75.96 0.00 0.00 6577.89 6272.73 25964.74 00:18:37.937 { 00:18:37.937 "results": [ 00:18:37.937 { 00:18:37.937 "job": "nvme0n1", 00:18:37.937 "core_mask": "0x2", 00:18:37.937 "workload": "randread", 00:18:37.937 "status": "finished", 00:18:37.937 "queue_depth": 128, 00:18:37.937 "io_size": 4096, 00:18:37.937 "runtime": 2.003837, 00:18:37.937 "iops": 19445.194394554048, 00:18:37.937 "mibps": 75.95779060372675, 00:18:37.937 "io_failed": 0, 00:18:37.937 "io_timeout": 0, 00:18:37.937 "avg_latency_us": 6577.894128283624, 00:18:37.937 "min_latency_us": 6272.731428571428, 00:18:37.937 "max_latency_us": 25964.73904761905 00:18:37.937 } 00:18:37.937 ], 00:18:37.937 "core_count": 1 00:18:37.937 } 00:18:37.937 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:37.937 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:37.937 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:37.937 | .driver_specific 00:18:37.937 | .nvme_error 00:18:37.937 | .status_code 00:18:37.937 | .command_transient_transport_error' 00:18:37.937 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:38.208 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:18:38.208 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79979 00:18:38.208 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79979 ']' 00:18:38.208 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79979 00:18:38.208 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@957 -- # uname 00:18:38.208 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:38.208 07:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79979 00:18:38.208 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:38.208 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:38.208 killing process with pid 79979 00:18:38.208 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79979' 00:18:38.208 Received shutdown signal, test time was about 2.000000 seconds 00:18:38.208 00:18:38.208 Latency(us) 00:18:38.208 [2024-11-08T07:45:56.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.208 [2024-11-08T07:45:56.169Z] =================================================================================================================== 00:18:38.208 [2024-11-08T07:45:56.169Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.208 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79979 00:18:38.208 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79979 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80039 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80039 /var/tmp/bperf.sock 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80039 ']' 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:38.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:38.468 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:38.468 I/O size of 131072 is greater than zero copy threshold (65536). 
00:18:38.468 Zero copy mechanism will not be used. 00:18:38.468 [2024-11-08 07:45:56.341598] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:18:38.468 [2024-11-08 07:45:56.341674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80039 ] 00:18:38.727 [2024-11-08 07:45:56.482252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.727 [2024-11-08 07:45:56.546494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.727 [2024-11-08 07:45:56.626713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:38.987 07:45:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:39.557 nvme0n1 00:18:39.557 07:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:39.557 07:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.557 07:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:39.557 07:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.557 07:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:39.557 07:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:39.557 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:39.557 Zero copy mechanism will not be used. 00:18:39.557 Running I/O for 2 seconds... 
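The trace above is the interesting part of this subtest: digest.sh reads the bdevperf results, counts how many completions came back as transient transport errors, tears down the first bdevperf instance, and then starts the next run (131072-byte random reads, queue depth 16) with crc32c corruption injected so that data digest errors keep occurring. The sketch below condenses the helper expansions visible in the trace (bperf_rpc, rpc_cmd, bperf_py) into plain commands for one such cycle. It is an illustrative reading of this log, not an excerpt of digest.sh: the socket path, target address, subsystem NQN and flag values are simply the ones used in this run, paths are shortened (the log shows them under /home/vagrant/spdk_repo/spdk), and the remark about which application rpc_cmd addresses is an inference from the helper names, not something the log states.

    # Start bdevperf with its own RPC socket; -z makes it wait for perform_tests.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    # (the test then waits for /var/tmp/bperf.sock via waitforlisten before issuing RPCs)

    # Keep per-opcode NVMe error counters and retry failed I/O at the bdev_nvme layer.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with data digest enabled (--ddgst) while injection is disabled,
    # so the controller comes up cleanly. rpc_cmd (no -s argument in the trace) appears
    # to address the nvmf target application on its default RPC socket.
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject corruption into crc32c operations (-t corrupt -i 32) and run the
    # 2-second workload; each corrupted digest is logged as a data digest error and
    # completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as in the records below.
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # Afterwards digest.sh asserts that the transient-error counter is non-zero
    # (152 for the queue-depth-128 run that finished above):
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Because --bdev-retry-count -1 keeps retrying the corrupted reads, the injected digest failures show up only in that per-status-code counter and not as io_failed in the results JSON, which is why the summary above reports zero failed I/O despite the long stream of error records.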
00:18:39.557 [2024-11-08 07:45:57.388472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.557 [2024-11-08 07:45:57.388528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.388541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.392205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.392239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.392251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.395733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.395767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.395778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.399330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.399362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.399373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.403037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.403067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.403078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.406596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.406651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.406663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.410217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.410248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.410259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.413760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.413791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.413801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.417381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.417413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.417423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.420960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.420999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.421010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.424530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.424560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.424570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.428178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.428208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.428218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.431767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.431798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.431809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.435355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.435387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.435399] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.438959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.439000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.439011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.442512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.442542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.442553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.446085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.446115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.446125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.449728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.449758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.449768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.453341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.453372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.453382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.457006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.457035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.457046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.460572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.460602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.460612] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.464172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.464202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.558 [2024-11-08 07:45:57.464213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.558 [2024-11-08 07:45:57.467822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.558 [2024-11-08 07:45:57.467862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.559 [2024-11-08 07:45:57.467872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.559 [2024-11-08 07:45:57.471444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.559 [2024-11-08 07:45:57.471475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.559 [2024-11-08 07:45:57.471486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.559 [2024-11-08 07:45:57.479225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.559 [2024-11-08 07:45:57.479262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.559 [2024-11-08 07:45:57.479289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.559 [2024-11-08 07:45:57.485332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.559 [2024-11-08 07:45:57.485363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.559 [2024-11-08 07:45:57.485373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.559 [2024-11-08 07:45:57.490900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.559 [2024-11-08 07:45:57.490932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.559 [2024-11-08 07:45:57.490944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.559 [2024-11-08 07:45:57.496179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.559 [2024-11-08 07:45:57.496209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:39.559 [2024-11-08 07:45:57.496220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.559 [2024-11-08 07:45:57.501368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.559 [2024-11-08 07:45:57.501400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.559 [2024-11-08 07:45:57.501427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.559 [2024-11-08 07:45:57.506596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.559 [2024-11-08 07:45:57.506635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.559 [2024-11-08 07:45:57.506645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.559 [2024-11-08 07:45:57.511895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.559 [2024-11-08 07:45:57.511928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.559 [2024-11-08 07:45:57.511939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.820 [2024-11-08 07:45:57.517218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.820 [2024-11-08 07:45:57.517249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.820 [2024-11-08 07:45:57.517260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.522471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.522504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.522515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.527746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.527787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.527798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.532923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.532954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.532965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.538186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.538218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.538229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.543373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.543405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.543416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.548624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.548654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.548665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.553887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.553918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.553928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.559152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.559182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.559209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.564433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.564464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.564490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.569641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.569672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.569683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.574895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.574928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.574954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.580150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.580180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.580190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.585348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.585380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.585391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.590543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.590574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.590601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.595777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.595809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.595819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.600958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.601000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.601010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.606175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 
00:18:39.821 [2024-11-08 07:45:57.606206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.606216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.611425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.611458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.611468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.616619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.616649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.616675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.621853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.621885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.621895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.627101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.627132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.627159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.632336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.632369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.632379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.637603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.637633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.637643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.642852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.642883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.821 [2024-11-08 07:45:57.642909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.821 [2024-11-08 07:45:57.648139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.821 [2024-11-08 07:45:57.648170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.648180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.653379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.653410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.653421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.658687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.658718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.658744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.663914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.663945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.663956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.669190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.669221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.669231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.674443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.674474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.674501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.679683] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.679717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.679727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.684960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.685007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.685017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.690220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.690384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.690398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.695625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.695659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.695670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.700835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.700867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.700877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.706086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.706116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.706126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.711315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.711346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.711357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:18:39.822 [2024-11-08 07:45:57.716491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.716522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.716532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.721689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.721843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.721858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.727104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.727136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.727146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.731151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.731183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.731193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.735208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.735240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.735251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.739294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.739325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.739336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.743262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.743295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.743306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.747259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.747292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.747303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.751213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.751244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.751255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.755176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.755207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.755218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.759120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.759150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.759161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.763040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.822 [2024-11-08 07:45:57.763071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.822 [2024-11-08 07:45:57.763081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.822 [2024-11-08 07:45:57.766956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.823 [2024-11-08 07:45:57.767001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.823 [2024-11-08 07:45:57.767012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.823 [2024-11-08 07:45:57.770908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.823 [2024-11-08 07:45:57.770940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.823 [2024-11-08 07:45:57.770951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.823 [2024-11-08 07:45:57.774976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:39.823 [2024-11-08 07:45:57.775037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.823 [2024-11-08 07:45:57.775049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.779044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.779076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.779087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.783008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.783041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.783052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.787056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.787086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.787098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.790976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.791020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.791032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.794948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.794989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.795003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.798940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.798973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 
[2024-11-08 07:45:57.798993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.802911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.802943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.802955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.806911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.806944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.806955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.810882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.810916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.810928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.814887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.814918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.814929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.818848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.818880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.818891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.822852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.084 [2024-11-08 07:45:57.822884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.084 [2024-11-08 07:45:57.822894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.084 [2024-11-08 07:45:57.826789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.826819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.826830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.830694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.830726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.830737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.834640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.834670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.834681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.838543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.838574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.838584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.842450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.842480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.842491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.846369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.846401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.846412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.850287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.850319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.850329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.854210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.854242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.854252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.858132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.858163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.858174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.862097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.862129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.862139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.866033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.866063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.866073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.869957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.870000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.870011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.873890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.873922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.873933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.877856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.877889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.877900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.881810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.881842] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.881852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.885797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.885832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.885843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.889716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.889748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.889758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.893712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.893743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.893753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.897674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.897705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.897716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.901610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.901642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.901652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.905634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.905667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.905678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.909550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 
00:18:40.085 [2024-11-08 07:45:57.909581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.909591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.913447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.913479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.913489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.085 [2024-11-08 07:45:57.917323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.085 [2024-11-08 07:45:57.917354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.085 [2024-11-08 07:45:57.917365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.921234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.921265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.921275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.925163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.925193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.925204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.929111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.929141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.929151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.933023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.933052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.933062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.937007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.937035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.937045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.940943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.941101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.941115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.945035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.945069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.945080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.948962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.949104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.949125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.953066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.953098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.953109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.956954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.957100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.957114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.961020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.961051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.961061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.964900] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.965061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.965075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.969014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.969045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.969056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.972967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.973007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.973018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.976948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.976993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.977004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.980992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.981022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.981032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.984991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.985021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.985032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.989020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.989056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.989067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:18:40.086 [2024-11-08 07:45:57.993109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.993144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.993155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:57.997162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:57.997196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:57.997208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:58.001210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:58.001244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:58.001255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:58.005501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:58.005534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:58.005545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:58.009554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:58.009586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.086 [2024-11-08 07:45:58.009596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.086 [2024-11-08 07:45:58.013544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.086 [2024-11-08 07:45:58.013576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.087 [2024-11-08 07:45:58.013587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.087 [2024-11-08 07:45:58.017564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.087 [2024-11-08 07:45:58.017597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.087 [2024-11-08 07:45:58.017608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.087 [2024-11-08 07:45:58.021618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.087 [2024-11-08 07:45:58.021651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.087 [2024-11-08 07:45:58.021662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.087 [2024-11-08 07:45:58.025634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.087 [2024-11-08 07:45:58.025667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.087 [2024-11-08 07:45:58.025678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.087 [2024-11-08 07:45:58.029622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.087 [2024-11-08 07:45:58.029654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.087 [2024-11-08 07:45:58.029665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.087 [2024-11-08 07:45:58.033597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.087 [2024-11-08 07:45:58.033630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.087 [2024-11-08 07:45:58.033641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.087 [2024-11-08 07:45:58.037571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.087 [2024-11-08 07:45:58.037604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.087 [2024-11-08 07:45:58.037616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.041570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.041603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.041614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.045575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.045606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.045616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.049666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.049699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.049710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.053658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.053690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.053700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.057636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.057669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.057679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.061743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.061775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.061785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.065783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.065817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.065828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.069883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.069919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.069931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.073925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.073959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:40.349 [2024-11-08 07:45:58.073970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.077952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.077998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.078010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.082032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.082063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.082074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.086014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.086045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.086056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.089975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.090022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.090033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.094065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.349 [2024-11-08 07:45:58.094099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.349 [2024-11-08 07:45:58.094110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.349 [2024-11-08 07:45:58.098067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.098102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.098113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.102496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.102532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.102543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.106566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.106599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.106619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.110584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.110628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.110655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.114629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.114677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.114688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.118675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.118708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.118720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.122674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.122707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.122719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.126683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.126715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.126726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.130674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.130706] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.130717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.134959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.135004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.135015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.138889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.138920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.138931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.142825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.142857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.142867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.146756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.146787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.146798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.150751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.150783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.150794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.154740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.154772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.154783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.158673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.158704] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.158715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.162771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.162804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.162816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.166865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.166898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.166908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.170775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.170806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.170816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.174863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.174895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.174906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.178954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.178998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.179010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.182920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.182953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.182963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.186842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 
00:18:40.350 [2024-11-08 07:45:58.186873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.350 [2024-11-08 07:45:58.186884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.350 [2024-11-08 07:45:58.190724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.350 [2024-11-08 07:45:58.190755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.190765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.194706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.194737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.194748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.198560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.198593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.198604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.202526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.202557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.202568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.206495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.206528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.206539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.210498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.210530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.210540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.214463] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.214495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.214506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.218433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.218464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.218474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.222340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.222372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.222382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.226308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.226339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.226349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.230229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.230260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.230271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.234199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.234231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.234242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.238181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.238212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.238223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:18:40.351 [2024-11-08 07:45:58.242161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.242192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.242202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.246128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.246159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.246169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.250068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.250099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.250110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.254064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.254095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.254106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.258023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.258055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.258065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.261962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.262118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.262131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.266015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.266047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.266057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.269995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.270026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.270036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.273997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.274027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.351 [2024-11-08 07:45:58.274037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.351 [2024-11-08 07:45:58.277867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.351 [2024-11-08 07:45:58.278027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.352 [2024-11-08 07:45:58.278042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.352 [2024-11-08 07:45:58.281916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.352 [2024-11-08 07:45:58.282067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.352 [2024-11-08 07:45:58.282082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.352 [2024-11-08 07:45:58.286012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.352 [2024-11-08 07:45:58.286044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.352 [2024-11-08 07:45:58.286055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.352 [2024-11-08 07:45:58.289924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.352 [2024-11-08 07:45:58.290068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.352 [2024-11-08 07:45:58.290081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.352 [2024-11-08 07:45:58.294010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.352 [2024-11-08 07:45:58.294041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.352 [2024-11-08 07:45:58.294052] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.352 [2024-11-08 07:45:58.298019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.352 [2024-11-08 07:45:58.298050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.352 [2024-11-08 07:45:58.298061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.352 [2024-11-08 07:45:58.301929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.352 [2024-11-08 07:45:58.302090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.352 [2024-11-08 07:45:58.302104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.306171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.306204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.306216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.310173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.310204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.310215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.314097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.314129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.314141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.318091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.318122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.318141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.322085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.322115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.322125] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.326063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.326099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.326109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.330018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.330047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.330058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.333937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.334094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.334108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.338030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.338062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.338073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.341975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.342017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.342027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.345909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.346056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.346070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.349963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.350103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:40.614 [2024-11-08 07:45:58.350117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.354052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.354084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.354095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.357970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.358011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.358022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.361945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.362094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.362108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.366009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.366041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.366052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.369911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.370054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.370068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.373973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.374015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.374026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.377991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.378020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.378031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.614 7269.00 IOPS, 908.62 MiB/s [2024-11-08T07:45:58.575Z] [2024-11-08 07:45:58.381689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.381824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.381839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.384571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.384600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.614 [2024-11-08 07:45:58.384611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.614 [2024-11-08 07:45:58.388541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.614 [2024-11-08 07:45:58.388575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.388585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.392503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.392536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.392547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.396466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.396499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.396509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.400443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.400474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.400485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.404389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 
07:45:58.404420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.404430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.408372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.408403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.408413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.412304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.412335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.412345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.416271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.416302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.416313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.420265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.420296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.420307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.424217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.424248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.424259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.428086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.428116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.428127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.431969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.432011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.432022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.435997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.436026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.436037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.439921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.439951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.439962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.443957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.443995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.444006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.447903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.447934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.447944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.451839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.451870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.451881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.455827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.455858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.455869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.459892] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.459923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.459934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.463827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.463857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.463867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.467782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.467814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.467825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.471756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.471788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.471798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.475733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.475775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.615 [2024-11-08 07:45:58.475786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:40.615 [2024-11-08 07:45:58.479687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.615 [2024-11-08 07:45:58.479718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.616 [2024-11-08 07:45:58.479732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:40.616 [2024-11-08 07:45:58.483617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:40.616 [2024-11-08 07:45:58.483649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:40.616 [2024-11-08 07:45:58.483660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0
00:18:40.616 [2024-11-08 07:45:58.487553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400)
00:18:40.616 [2024-11-08 07:45:58.487583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:40.616 [2024-11-08 07:45:58.487594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:18:40.616 [2024-11-08 07:45:58.491512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400)
00:18:40.616 [2024-11-08 07:45:58.491544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:40.616 [2024-11-08 07:45:58.491554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x132e400), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining in-flight READs on qid:1 (cid 0-3 and 11-15, varying lba) from 07:45:58.495 through 07:45:59.058 ...]
00:18:41.146 [2024-11-08 07:45:59.062004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400)
00:18:41.146 [2024-11-08 07:45:59.062035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:41.146 [2024-11-08 07:45:59.062046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.065913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.066070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.146 [2024-11-08 07:45:59.066085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.070011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.070042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.146 [2024-11-08 07:45:59.070052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.073868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.074019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.146 [2024-11-08 07:45:59.074033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.077937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.078104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.146 [2024-11-08 07:45:59.078120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.082027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.082058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.146 [2024-11-08 07:45:59.082069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.085910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.086085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.146 [2024-11-08 07:45:59.086099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.090005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.090036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.146 [2024-11-08 07:45:59.090047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.093896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.094038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.146 [2024-11-08 07:45:59.094065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.146 [2024-11-08 07:45:59.098277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.146 [2024-11-08 07:45:59.098313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.147 [2024-11-08 07:45:59.098324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.102343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.102375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.102386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.106738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.106771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.106782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.110897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.110931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.110942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.115021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.115053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.115065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.119117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.119148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.119160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.123379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.123417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.123429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.127768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.127802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.127812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.131851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.131884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.131894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.136005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.136038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.136048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.140035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.140067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.140078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.144066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.144097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.144108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.148103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 
00:18:41.409 [2024-11-08 07:45:59.148134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.148144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.152231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.152261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.152272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.156179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.156210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.156221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.160151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.160182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.160192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.164105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.164136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.164147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.168080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.168112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.168123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.172073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.172103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.172114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.176033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.176062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.409 [2024-11-08 07:45:59.176072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.409 [2024-11-08 07:45:59.179999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.409 [2024-11-08 07:45:59.180030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.180041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.184030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.184059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.184069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.187974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.188025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.188035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.191965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.192113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.192127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.196119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.196152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.196163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.200082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.200112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.200122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.204130] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.204160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.204170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.208130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.208161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.208172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.212105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.212135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.212146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.216091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.216121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.216132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.220054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.220083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.220093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.224020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.224049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.224059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.228066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.228096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.228106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.232021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.232050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.232060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.236023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.236051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.236062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.239948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.240106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.240123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.244116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.244149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.244160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.248057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.248087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.248098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.252051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.252080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.252091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.256008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.256038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.256049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.259955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.260108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.260130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.264019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.264051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.264061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.268055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.268086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.268097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.272030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.272059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.272070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.410 [2024-11-08 07:45:59.275967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.410 [2024-11-08 07:45:59.276008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.410 [2024-11-08 07:45:59.276018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.279892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.280049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.280066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.284002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.284034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.284045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.287968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.288010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.288021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.291883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.292037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.292050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.296012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.296042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.296053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.299928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.300081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.300104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.304046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.304076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.304088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.308042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.308072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.308082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.311992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.312021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:41.411 [2024-11-08 07:45:59.312031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.315955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.316113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.316131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.320101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.320133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.320143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.324193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.324225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.324235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.328189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.328220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.328231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.332211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.332242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.332253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.336244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.336274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.336284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.340242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.340272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.340283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.344275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.344306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.344316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.348261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.348292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.348303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.352245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.352277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.352287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.356185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.356216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.356226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.360160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.360190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.360200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.411 [2024-11-08 07:45:59.364206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.411 [2024-11-08 07:45:59.364239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.411 [2024-11-08 07:45:59.364250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.672 [2024-11-08 07:45:59.368268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.672 [2024-11-08 07:45:59.368299] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.672 [2024-11-08 07:45:59.368309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:41.672 [2024-11-08 07:45:59.372263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.672 [2024-11-08 07:45:59.372295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.672 [2024-11-08 07:45:59.372307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:41.672 [2024-11-08 07:45:59.376270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.672 [2024-11-08 07:45:59.376300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.672 [2024-11-08 07:45:59.376311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:41.672 7514.00 IOPS, 939.25 MiB/s [2024-11-08T07:45:59.633Z] [2024-11-08 07:45:59.380893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x132e400) 00:18:41.672 [2024-11-08 07:45:59.380927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:41.672 [2024-11-08 07:45:59.380938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:41.672 00:18:41.672 Latency(us) 00:18:41.672 [2024-11-08T07:45:59.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.672 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:41.672 nvme0n1 : 2.00 7510.47 938.81 0.00 0.00 2127.73 631.95 12420.63 00:18:41.672 [2024-11-08T07:45:59.633Z] =================================================================================================================== 00:18:41.672 [2024-11-08T07:45:59.633Z] Total : 7510.47 938.81 0.00 0.00 2127.73 631.95 12420.63 00:18:41.672 { 00:18:41.672 "results": [ 00:18:41.672 { 00:18:41.672 "job": "nvme0n1", 00:18:41.672 "core_mask": "0x2", 00:18:41.672 "workload": "randread", 00:18:41.672 "status": "finished", 00:18:41.672 "queue_depth": 16, 00:18:41.672 "io_size": 131072, 00:18:41.672 "runtime": 2.00307, 00:18:41.672 "iops": 7510.4714263605365, 00:18:41.672 "mibps": 938.8089282950671, 00:18:41.672 "io_failed": 0, 00:18:41.672 "io_timeout": 0, 00:18:41.672 "avg_latency_us": 2127.7257791114316, 00:18:41.672 "min_latency_us": 631.9542857142857, 00:18:41.672 "max_latency_us": 12420.63238095238 00:18:41.672 } 00:18:41.672 ], 00:18:41.672 "core_count": 1 00:18:41.672 } 00:18:41.672 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:41.672 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:41.672 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:41.672 | .driver_specific 00:18:41.672 | .nvme_error 00:18:41.672 | 
.status_code 00:18:41.672 | .command_transient_transport_error' 00:18:41.672 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 486 > 0 )) 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80039 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80039 ']' 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80039 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80039 00:18:41.931 killing process with pid 80039 00:18:41.931 Received shutdown signal, test time was about 2.000000 seconds 00:18:41.931 00:18:41.931 Latency(us) 00:18:41.931 [2024-11-08T07:45:59.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.931 [2024-11-08T07:45:59.892Z] =================================================================================================================== 00:18:41.931 [2024-11-08T07:45:59.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80039' 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80039 00:18:41.931 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80039 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80086 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80086 /var/tmp/bperf.sock 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80086 ']' 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:42.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:42.190 07:45:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.190 [2024-11-08 07:46:00.000457] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:18:42.190 [2024-11-08 07:46:00.000756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80086 ] 00:18:42.190 [2024-11-08 07:46:00.141748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.448 [2024-11-08 07:46:00.204673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.448 [2024-11-08 07:46:00.280004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:42.449 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:42.449 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:42.449 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:42.449 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:42.708 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:42.708 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.708 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.708 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.708 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:42.708 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:42.967 nvme0n1 00:18:42.967 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:42.967 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.967 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:42.967 07:46:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.967 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:42.967 07:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:43.227 Running I/O for 2 seconds... 00:18:43.227 [2024-11-08 07:46:00.992969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fcdd0 00:18:43.227 [2024-11-08 07:46:00.994065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:00.994103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.005175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fd640 00:18:43.227 [2024-11-08 07:46:01.006478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.006716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.017963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fdeb0 00:18:43.227 [2024-11-08 07:46:01.019072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.019102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.029965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fe720 00:18:43.227 [2024-11-08 07:46:01.031051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.031080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.042205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ff3c8 00:18:43.227 [2024-11-08 07:46:01.043275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.043306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.059479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ff3c8 00:18:43.227 [2024-11-08 07:46:01.061488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.061627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.071808] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fe720 00:18:43.227 [2024-11-08 07:46:01.073848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.073878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.084046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fdeb0 00:18:43.227 [2024-11-08 07:46:01.086032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.086060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.096238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fd640 00:18:43.227 [2024-11-08 07:46:01.098209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.098236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.108322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fcdd0 00:18:43.227 [2024-11-08 07:46:01.110317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.110344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.120648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fc560 00:18:43.227 [2024-11-08 07:46:01.122597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.122631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.132704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fbcf0 00:18:43.227 [2024-11-08 07:46:01.134656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.134685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.144859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fb480 00:18:43.227 [2024-11-08 07:46:01.146797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.146825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 
07:46:01.157246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fac10 00:18:43.227 [2024-11-08 07:46:01.159183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.159215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.170137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fa3a0 00:18:43.227 [2024-11-08 07:46:01.172217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.227 [2024-11-08 07:46:01.172247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:43.227 [2024-11-08 07:46:01.183248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f9b30 00:18:43.487 [2024-11-08 07:46:01.185140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.185170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.196110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f92c0 00:18:43.487 [2024-11-08 07:46:01.197964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.197999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.208678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f8a50 00:18:43.487 [2024-11-08 07:46:01.210569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.210734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.221475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f81e0 00:18:43.487 [2024-11-08 07:46:01.223272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.223302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.233560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f7970 00:18:43.487 [2024-11-08 07:46:01.235330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.235359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:18:43.487 [2024-11-08 07:46:01.245627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f7100 00:18:43.487 [2024-11-08 07:46:01.247395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.247425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.257667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f6890 00:18:43.487 [2024-11-08 07:46:01.259428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.259458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.269744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f6020 00:18:43.487 [2024-11-08 07:46:01.271569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.271598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.282022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f57b0 00:18:43.487 [2024-11-08 07:46:01.283720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.283749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.294152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f4f40 00:18:43.487 [2024-11-08 07:46:01.295896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.295925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.306297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f46d0 00:18:43.487 [2024-11-08 07:46:01.308030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.308057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.318392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f3e60 00:18:43.487 [2024-11-08 07:46:01.320099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.320128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 
sqhd:004d p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.330631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f35f0 00:18:43.487 [2024-11-08 07:46:01.332319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.332347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.342757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f2d80 00:18:43.487 [2024-11-08 07:46:01.344490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.344517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.355060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f2510 00:18:43.487 [2024-11-08 07:46:01.356859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.356889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.367318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f1ca0 00:18:43.487 [2024-11-08 07:46:01.368939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.369089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.379673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f1430 00:18:43.487 [2024-11-08 07:46:01.381458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.381623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.392253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f0bc0 00:18:43.487 [2024-11-08 07:46:01.394045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.394198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.404885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f0350 00:18:43.487 [2024-11-08 07:46:01.406619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.406787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.417544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166efae0 00:18:43.487 [2024-11-08 07:46:01.419308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.419468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.430242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ef270 00:18:43.487 [2024-11-08 07:46:01.431993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.432138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:43.487 [2024-11-08 07:46:01.442811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166eea00 00:18:43.487 [2024-11-08 07:46:01.444544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.487 [2024-11-08 07:46:01.444693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:43.748 [2024-11-08 07:46:01.455877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ee190 00:18:43.748 [2024-11-08 07:46:01.457586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.748 [2024-11-08 07:46:01.457750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:43.748 [2024-11-08 07:46:01.469053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ed920 00:18:43.748 [2024-11-08 07:46:01.470708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.748 [2024-11-08 07:46:01.470862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:43.748 [2024-11-08 07:46:01.482319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ed0b0 00:18:43.748 [2024-11-08 07:46:01.483981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.748 [2024-11-08 07:46:01.484150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:43.748 [2024-11-08 07:46:01.495398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ec840 00:18:43.748 [2024-11-08 07:46:01.497053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.748 [2024-11-08 07:46:01.497226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:43.748 [2024-11-08 07:46:01.508668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ebfd0 00:18:43.748 [2024-11-08 07:46:01.510516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.748 [2024-11-08 07:46:01.510683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:43.748 [2024-11-08 07:46:01.522230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166eb760 00:18:43.748 [2024-11-08 07:46:01.523769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.748 [2024-11-08 07:46:01.523799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:43.748 [2024-11-08 07:46:01.535066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166eaef0 00:18:43.748 [2024-11-08 07:46:01.536628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.748 [2024-11-08 07:46:01.536659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:43.748 [2024-11-08 07:46:01.547742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ea680 00:18:43.748 [2024-11-08 07:46:01.549215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.748 [2024-11-08 07:46:01.549245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.560374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e9e10 00:18:43.749 [2024-11-08 07:46:01.561801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.561832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.573035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e95a0 00:18:43.749 [2024-11-08 07:46:01.574445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.574476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.585569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e8d30 00:18:43.749 [2024-11-08 07:46:01.586974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.587013] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.597994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e84c0 00:18:43.749 [2024-11-08 07:46:01.599395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.599424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.610616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e7c50 00:18:43.749 [2024-11-08 07:46:01.612276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.612305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.623133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e73e0 00:18:43.749 [2024-11-08 07:46:01.624496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.624524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.635297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e6b70 00:18:43.749 [2024-11-08 07:46:01.636641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.636671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.647485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e6300 00:18:43.749 [2024-11-08 07:46:01.648813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.648843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.659624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e5a90 00:18:43.749 [2024-11-08 07:46:01.660938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.660967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.671703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e5220 00:18:43.749 [2024-11-08 07:46:01.673000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.673029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.683775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e49b0 00:18:43.749 [2024-11-08 07:46:01.685074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.685102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:43.749 [2024-11-08 07:46:01.695845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e4140 00:18:43.749 [2024-11-08 07:46:01.697128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.749 [2024-11-08 07:46:01.697154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.708087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e38d0 00:18:44.009 [2024-11-08 07:46:01.709350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.709377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.720334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e3060 00:18:44.009 [2024-11-08 07:46:01.721576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.721604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.732898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e27f0 00:18:44.009 [2024-11-08 07:46:01.734123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.734151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.745017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e1f80 00:18:44.009 [2024-11-08 07:46:01.746177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.746205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.757332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e1710 00:18:44.009 [2024-11-08 07:46:01.758482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 
07:46:01.758509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.769566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e0ea0 00:18:44.009 [2024-11-08 07:46:01.770708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.770738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.781746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e0630 00:18:44.009 [2024-11-08 07:46:01.782875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.782905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.793774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166dfdc0 00:18:44.009 [2024-11-08 07:46:01.794887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.794916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.805878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166df550 00:18:44.009 [2024-11-08 07:46:01.806981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.807018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.817915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166dece0 00:18:44.009 [2024-11-08 07:46:01.819007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.819036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.829908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166de470 00:18:44.009 [2024-11-08 07:46:01.831025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.009 [2024-11-08 07:46:01.831054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:44.009 [2024-11-08 07:46:01.847212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ddc00 00:18:44.009 [2024-11-08 07:46:01.849186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:44.009 [2024-11-08 07:46:01.849213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.859238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166de470 00:18:44.010 [2024-11-08 07:46:01.861224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.861251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.871501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166dece0 00:18:44.010 [2024-11-08 07:46:01.873497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.873526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.883702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166df550 00:18:44.010 [2024-11-08 07:46:01.885707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.885734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.895789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166dfdc0 00:18:44.010 [2024-11-08 07:46:01.897774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.897801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.907887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e0630 00:18:44.010 [2024-11-08 07:46:01.909863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.909889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.920064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e0ea0 00:18:44.010 [2024-11-08 07:46:01.922010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.922036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.932105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e1710 00:18:44.010 [2024-11-08 07:46:01.934051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17407 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.934078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.944277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e1f80 00:18:44.010 [2024-11-08 07:46:01.946205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.946233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:44.010 [2024-11-08 07:46:01.956342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e27f0 00:18:44.010 [2024-11-08 07:46:01.958254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.010 [2024-11-08 07:46:01.958280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:01.968619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e3060 00:18:44.270 [2024-11-08 07:46:01.970516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:01.970545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:01.981044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e38d0 00:18:44.270 20369.00 IOPS, 79.57 MiB/s [2024-11-08T07:46:02.231Z] [2024-11-08 07:46:01.982926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:01.982954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:01.993311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e4140 00:18:44.270 [2024-11-08 07:46:01.995187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:01.995217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.005642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e49b0 00:18:44.270 [2024-11-08 07:46:02.007508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.007538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.018120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e5220 00:18:44.270 [2024-11-08 07:46:02.019949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.019984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.030595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e5a90 00:18:44.270 [2024-11-08 07:46:02.032422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.032454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.042738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e6300 00:18:44.270 [2024-11-08 07:46:02.044531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.044559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.054803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e6b70 00:18:44.270 [2024-11-08 07:46:02.056620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.056649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.066936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e73e0 00:18:44.270 [2024-11-08 07:46:02.068748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.068777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.079358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e7c50 00:18:44.270 [2024-11-08 07:46:02.081123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.081150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.091434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e84c0 00:18:44.270 [2024-11-08 07:46:02.093184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.093212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.103523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e8d30 00:18:44.270 [2024-11-08 
07:46:02.105260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.105288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.115643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e95a0 00:18:44.270 [2024-11-08 07:46:02.117364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.117392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.127679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166e9e10 00:18:44.270 [2024-11-08 07:46:02.129386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.129414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.139968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ea680 00:18:44.270 [2024-11-08 07:46:02.141658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.270 [2024-11-08 07:46:02.141686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:44.270 [2024-11-08 07:46:02.152220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166eaef0 00:18:44.271 [2024-11-08 07:46:02.153890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.271 [2024-11-08 07:46:02.153917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:44.271 [2024-11-08 07:46:02.164380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166eb760 00:18:44.271 [2024-11-08 07:46:02.166034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.271 [2024-11-08 07:46:02.166059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:44.271 [2024-11-08 07:46:02.176565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ebfd0 00:18:44.271 [2024-11-08 07:46:02.178236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.271 [2024-11-08 07:46:02.178264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:44.271 [2024-11-08 07:46:02.189452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ec840 
00:18:44.271 [2024-11-08 07:46:02.191120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.271 [2024-11-08 07:46:02.191150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:44.271 [2024-11-08 07:46:02.202334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ed0b0 00:18:44.271 [2024-11-08 07:46:02.203942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.271 [2024-11-08 07:46:02.203971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:44.271 [2024-11-08 07:46:02.215072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ed920 00:18:44.271 [2024-11-08 07:46:02.216648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.271 [2024-11-08 07:46:02.216677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:44.271 [2024-11-08 07:46:02.227759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ee190 00:18:44.531 [2024-11-08 07:46:02.229362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.229392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.240303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166eea00 00:18:44.531 [2024-11-08 07:46:02.241867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.241894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.252378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ef270 00:18:44.531 [2024-11-08 07:46:02.253929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.253957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.264598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166efae0 00:18:44.531 [2024-11-08 07:46:02.266130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.266157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.276993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with 
pdu=0x2000166f0350 00:18:44.531 [2024-11-08 07:46:02.278523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.278551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.289055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f0bc0 00:18:44.531 [2024-11-08 07:46:02.290560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.290587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.301169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f1430 00:18:44.531 [2024-11-08 07:46:02.302682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.302715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.313376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f1ca0 00:18:44.531 [2024-11-08 07:46:02.314852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.314880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.325544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f2510 00:18:44.531 [2024-11-08 07:46:02.327014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.327040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.337587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f2d80 00:18:44.531 [2024-11-08 07:46:02.339038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.339063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.349767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f35f0 00:18:44.531 [2024-11-08 07:46:02.351210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.351239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.361961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22e6750) with pdu=0x2000166f3e60 00:18:44.531 [2024-11-08 07:46:02.363385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.363412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.374255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f46d0 00:18:44.531 [2024-11-08 07:46:02.375651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.375679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.386375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f4f40 00:18:44.531 [2024-11-08 07:46:02.387753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.531 [2024-11-08 07:46:02.387782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:44.531 [2024-11-08 07:46:02.398519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f57b0 00:18:44.531 [2024-11-08 07:46:02.399885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-11-08 07:46:02.399914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:44.532 [2024-11-08 07:46:02.410702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f6020 00:18:44.532 [2024-11-08 07:46:02.412053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-11-08 07:46:02.412082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:44.532 [2024-11-08 07:46:02.422698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f6890 00:18:44.532 [2024-11-08 07:46:02.424036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-11-08 07:46:02.424064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:44.532 [2024-11-08 07:46:02.434684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f7100 00:18:44.532 [2024-11-08 07:46:02.435999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-11-08 07:46:02.436030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:44.532 [2024-11-08 07:46:02.446852] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f7970 00:18:44.532 [2024-11-08 07:46:02.448160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-11-08 07:46:02.448187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:44.532 [2024-11-08 07:46:02.458938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f81e0 00:18:44.532 [2024-11-08 07:46:02.460231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-11-08 07:46:02.460259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:44.532 [2024-11-08 07:46:02.470952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f8a50 00:18:44.532 [2024-11-08 07:46:02.472227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-11-08 07:46:02.472257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:44.532 [2024-11-08 07:46:02.483045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f92c0 00:18:44.532 [2024-11-08 07:46:02.484298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.532 [2024-11-08 07:46:02.484327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:44.792 [2024-11-08 07:46:02.495462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f9b30 00:18:44.792 [2024-11-08 07:46:02.496703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.792 [2024-11-08 07:46:02.496732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:44.792 [2024-11-08 07:46:02.507599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fa3a0 00:18:44.792 [2024-11-08 07:46:02.508825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.792 [2024-11-08 07:46:02.508853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.519757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fac10 00:18:44.793 [2024-11-08 07:46:02.520965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.521000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.531734] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fb480 00:18:44.793 [2024-11-08 07:46:02.532924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.532952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.543783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fbcf0 00:18:44.793 [2024-11-08 07:46:02.544957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.544993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.555920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fc560 00:18:44.793 [2024-11-08 07:46:02.557088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.557116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.568032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fcdd0 00:18:44.793 [2024-11-08 07:46:02.569178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.569208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.580255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fd640 00:18:44.793 [2024-11-08 07:46:02.581391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.581421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.592598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fdeb0 00:18:44.793 [2024-11-08 07:46:02.593715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.593743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.604609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fe720 00:18:44.793 [2024-11-08 07:46:02.605711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.605739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:44.793 
[2024-11-08 07:46:02.616973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ff3c8 00:18:44.793 [2024-11-08 07:46:02.618082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.618110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.635572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166ff3c8 00:18:44.793 [2024-11-08 07:46:02.637656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.637685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.648250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fe720 00:18:44.793 [2024-11-08 07:46:02.650289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.650317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.660815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fdeb0 00:18:44.793 [2024-11-08 07:46:02.662834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.662863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.673268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fd640 00:18:44.793 [2024-11-08 07:46:02.675270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.675298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.685740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fcdd0 00:18:44.793 [2024-11-08 07:46:02.687711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.687738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.698167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fc560 00:18:44.793 [2024-11-08 07:46:02.700136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.700163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006d p:0 
m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.710500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fbcf0 00:18:44.793 [2024-11-08 07:46:02.712439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.712466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.722774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fb480 00:18:44.793 [2024-11-08 07:46:02.724704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.724732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.735059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fac10 00:18:44.793 [2024-11-08 07:46:02.736962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.736997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:44.793 [2024-11-08 07:46:02.747450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166fa3a0 00:18:44.793 [2024-11-08 07:46:02.749366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.793 [2024-11-08 07:46:02.749396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.760197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f9b30 00:18:45.054 [2024-11-08 07:46:02.762086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.762113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.772715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f92c0 00:18:45.054 [2024-11-08 07:46:02.774617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.774646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.785461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f8a50 00:18:45.054 [2024-11-08 07:46:02.787325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.787354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 
cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.797944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f81e0 00:18:45.054 [2024-11-08 07:46:02.799804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.799832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.810291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f7970 00:18:45.054 [2024-11-08 07:46:02.812151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.812178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.822557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f7100 00:18:45.054 [2024-11-08 07:46:02.824379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.824415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.834778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f6890 00:18:45.054 [2024-11-08 07:46:02.836573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.836601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.846957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f6020 00:18:45.054 [2024-11-08 07:46:02.848748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.848776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.859089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f57b0 00:18:45.054 [2024-11-08 07:46:02.860850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.860878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.871128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f4f40 00:18:45.054 [2024-11-08 07:46:02.872877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.872905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.883270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f46d0 00:18:45.054 [2024-11-08 07:46:02.885018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.885043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.895417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f3e60 00:18:45.054 [2024-11-08 07:46:02.897166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.897198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.907526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f35f0 00:18:45.054 [2024-11-08 07:46:02.909244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.909270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:45.054 [2024-11-08 07:46:02.919552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f2d80 00:18:45.054 [2024-11-08 07:46:02.921252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.054 [2024-11-08 07:46:02.921279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:45.055 [2024-11-08 07:46:02.931574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f2510 00:18:45.055 [2024-11-08 07:46:02.933256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.055 [2024-11-08 07:46:02.933284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:45.055 [2024-11-08 07:46:02.943719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f1ca0 00:18:45.055 [2024-11-08 07:46:02.945390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.055 [2024-11-08 07:46:02.945418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:45.055 [2024-11-08 07:46:02.955824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f1430 00:18:45.055 [2024-11-08 07:46:02.957479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.055 [2024-11-08 07:46:02.957506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:45.055 [2024-11-08 07:46:02.967967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f0bc0 00:18:45.055 [2024-11-08 07:46:02.969609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.055 [2024-11-08 07:46:02.969637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:45.055 [2024-11-08 07:46:02.980165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6750) with pdu=0x2000166f0350 00:18:45.055 [2024-11-08 07:46:02.982029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.055 [2024-11-08 07:46:02.982054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:45.055 20494.50 IOPS, 80.06 MiB/s 00:18:45.055 Latency(us) 00:18:45.055 [2024-11-08T07:46:03.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.055 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.055 nvme0n1 : 2.01 20536.10 80.22 0.00 0.00 6227.75 3105.16 23093.64 00:18:45.055 [2024-11-08T07:46:03.016Z] =================================================================================================================== 00:18:45.055 [2024-11-08T07:46:03.016Z] Total : 20536.10 80.22 0.00 0.00 6227.75 3105.16 23093.64 00:18:45.055 { 00:18:45.055 "results": [ 00:18:45.055 { 00:18:45.055 "job": "nvme0n1", 00:18:45.055 "core_mask": "0x2", 00:18:45.055 "workload": "randwrite", 00:18:45.055 "status": "finished", 00:18:45.055 "queue_depth": 128, 00:18:45.055 "io_size": 4096, 00:18:45.055 "runtime": 2.008317, 00:18:45.055 "iops": 20536.100625548657, 00:18:45.055 "mibps": 80.21914306854944, 00:18:45.055 "io_failed": 0, 00:18:45.055 "io_timeout": 0, 00:18:45.055 "avg_latency_us": 6227.752860248723, 00:18:45.055 "min_latency_us": 3105.158095238095, 00:18:45.055 "max_latency_us": 23093.638095238097 00:18:45.055 } 00:18:45.055 ], 00:18:45.055 "core_count": 1 00:18:45.055 } 00:18:45.055 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:45.324 | .driver_specific 00:18:45.324 | .nvme_error 00:18:45.324 | .status_code 00:18:45.324 | .command_transient_transport_error' 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80086 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80086 ']' 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80086 00:18:45.324 
07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:45.324 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80086 00:18:45.587 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:45.587 killing process with pid 80086 00:18:45.587 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:45.587 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80086' 00:18:45.587 Received shutdown signal, test time was about 2.000000 seconds 00:18:45.587 00:18:45.587 Latency(us) 00:18:45.587 [2024-11-08T07:46:03.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.587 [2024-11-08T07:46:03.548Z] =================================================================================================================== 00:18:45.587 [2024-11-08T07:46:03.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.587 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80086 00:18:45.587 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80086 00:18:45.847 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80139 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80139 /var/tmp/bperf.sock 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80139 ']' 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:45.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:45.848 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.848 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:45.848 Zero copy mechanism will not be used. 00:18:45.848 [2024-11-08 07:46:03.612713] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:18:45.848 [2024-11-08 07:46:03.612785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80139 ] 00:18:45.848 [2024-11-08 07:46:03.751591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.108 [2024-11-08 07:46:03.814714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.108 [2024-11-08 07:46:03.890736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.108 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:46.108 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:46.108 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:46.108 07:46:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:46.367 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:46.367 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.367 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.367 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.367 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.367 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.627 nvme0n1 00:18:46.627 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:46.627 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.627 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.627 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.627 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:46.628 07:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:46.628 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:46.628 Zero copy mechanism will not be used. 00:18:46.628 Running I/O for 2 seconds... 00:18:46.628 [2024-11-08 07:46:04.563745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.628 [2024-11-08 07:46:04.563850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.628 [2024-11-08 07:46:04.563887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.628 [2024-11-08 07:46:04.568787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.628 [2024-11-08 07:46:04.568991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.628 [2024-11-08 07:46:04.569021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.628 [2024-11-08 07:46:04.573554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.628 [2024-11-08 07:46:04.573727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.628 [2024-11-08 07:46:04.573748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.628 [2024-11-08 07:46:04.578236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.628 [2024-11-08 07:46:04.578427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.628 [2024-11-08 07:46:04.578448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.628 [2024-11-08 07:46:04.583029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.628 [2024-11-08 07:46:04.583226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.628 [2024-11-08 07:46:04.583247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.587858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.588076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.588098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.592672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.592850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.592871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.597395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.597593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.597612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.602060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.602221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.602241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.606790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.606980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.607013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.611570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.611735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.611754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.616350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.616497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.616517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.621090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.621252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.621279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.625752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 
07:46:04.625940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.625961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.630325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.630496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.630516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.634972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.635158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.635177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.639630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.639795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.639815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.644392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.644551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.644571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.649118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.649259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.649280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.653778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.653971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.654003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.658396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with 
pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.658586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.658614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.663097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.663242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.663262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.667815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.667993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.668013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.672630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.672799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.672819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.677372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.677541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.677560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.681949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.682139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.682159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.686709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.686868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.686889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.691376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.691525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.691545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.696081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.890 [2024-11-08 07:46:04.696237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.890 [2024-11-08 07:46:04.696257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.890 [2024-11-08 07:46:04.700878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.701076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.701097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.705612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.705790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.705809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.710255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.710429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.710449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.714963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.715137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.715157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.719668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.719816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.719835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.724415] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.724612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.724631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.729135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.729292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.729312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.733763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.733916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.733937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.738381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.738553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.738573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.743026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.743202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.743221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.747705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.747838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.747857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.752528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.752672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.752691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.757233] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.757422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.757442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.761863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.762065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.762085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.766509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.766704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.766724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.771245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.771383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.771403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.775896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.776099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.776119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.780594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.780770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.780790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.785265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.785413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.785433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.891 
[2024-11-08 07:46:04.789929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.790134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.790153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.794585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.794755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.794775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.799247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.799389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.799409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.803939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.804128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.804148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.808681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.808826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.808846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.891 [2024-11-08 07:46:04.813357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.891 [2024-11-08 07:46:04.813533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.891 [2024-11-08 07:46:04.813553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.892 [2024-11-08 07:46:04.818066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.892 [2024-11-08 07:46:04.818205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.892 [2024-11-08 07:46:04.818224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:18:46.892 [2024-11-08 07:46:04.822752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.892 [2024-11-08 07:46:04.822916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.892 [2024-11-08 07:46:04.822936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.892 [2024-11-08 07:46:04.827435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.892 [2024-11-08 07:46:04.827585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.892 [2024-11-08 07:46:04.827605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.892 [2024-11-08 07:46:04.832136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.892 [2024-11-08 07:46:04.832300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.892 [2024-11-08 07:46:04.832320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.892 [2024-11-08 07:46:04.836827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.892 [2024-11-08 07:46:04.836972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.892 [2024-11-08 07:46:04.837003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.892 [2024-11-08 07:46:04.841472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.892 [2024-11-08 07:46:04.841657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.892 [2024-11-08 07:46:04.841677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.892 [2024-11-08 07:46:04.846234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:46.892 [2024-11-08 07:46:04.846418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.892 [2024-11-08 07:46:04.846438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.851013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.851165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.851185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.855746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.855930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.855950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.860509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.860652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.860672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.865202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.865374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.865393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.869868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.870037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.870058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.874519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.874722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.874742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.879287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.879456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.879474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.884033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.884205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.884225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.888728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.888920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.888939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.893408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.893552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.893571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.898024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.898180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.898200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.902723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.902897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.153 [2024-11-08 07:46:04.902916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.153 [2024-11-08 07:46:04.907368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.153 [2024-11-08 07:46:04.907533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.907551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.912015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.912187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.912206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.916620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.916812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.916831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.921287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.921464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.921483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.925929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.926130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.926150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.930586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.930762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.930782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.935213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.935395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.935414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.939851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.940056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.940076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.944769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.944920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.944943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.949537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.949716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 
07:46:04.949736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.954295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.954449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.954469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.959022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.959223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.959244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.963956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.964131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.964151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.969611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.969758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.969779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.974383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.974528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.974548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.979162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.979336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.979356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.983893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.984079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:47.154 [2024-11-08 07:46:04.984099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.988669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.988817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.988837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.993420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.993568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.993588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:04.998105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:04.998276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:04.998296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:05.002850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:05.003028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:05.003047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:05.007542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:05.007706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:05.007726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:05.012306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:05.012457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:05.012478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:05.017126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:05.017325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:05.017345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:05.021844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:05.022013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.154 [2024-11-08 07:46:05.022033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.154 [2024-11-08 07:46:05.026590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.154 [2024-11-08 07:46:05.026744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.026764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.031282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.031427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.031447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.036023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.036200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.036219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.040713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.040965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.040996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.045405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.045565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.045585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.050097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.050238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.050257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.054903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.055095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.055115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.059661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.059842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.059863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.064375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.064539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.064559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.069158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.069349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.069369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.073874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.074059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.074079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.078562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.078751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.078770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.083193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.083334] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.083355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.087866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.088061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.088081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.092497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.092703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.092724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.097407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.097598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.097618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.102262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.102395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.102415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.155 [2024-11-08 07:46:05.107084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.155 [2024-11-08 07:46:05.107236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.155 [2024-11-08 07:46:05.107257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.416 [2024-11-08 07:46:05.111804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.416 [2024-11-08 07:46:05.111966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.416 [2024-11-08 07:46:05.112010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.416 [2024-11-08 07:46:05.116437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.416 [2024-11-08 07:46:05.116634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.416 [2024-11-08 07:46:05.116655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.416 [2024-11-08 07:46:05.121092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.416 [2024-11-08 07:46:05.121241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.416 [2024-11-08 07:46:05.121261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.416 [2024-11-08 07:46:05.125720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.416 [2024-11-08 07:46:05.125917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.416 [2024-11-08 07:46:05.125936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.416 [2024-11-08 07:46:05.130394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.416 [2024-11-08 07:46:05.130602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.416 [2024-11-08 07:46:05.130646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.416 [2024-11-08 07:46:05.135330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.416 [2024-11-08 07:46:05.135518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.416 [2024-11-08 07:46:05.135539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.416 [2024-11-08 07:46:05.140154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.416 [2024-11-08 07:46:05.140302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.416 [2024-11-08 07:46:05.140322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.416 [2024-11-08 07:46:05.144870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.145077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.145098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.149595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 
07:46:05.149793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.149813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.154407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.154587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.154615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.159057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.159241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.159262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.163749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.163894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.163913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.168503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.168643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.168663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.173095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.173245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.173265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.177676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.177820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.177840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.182369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 
00:18:47.417 [2024-11-08 07:46:05.182521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.182540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.186959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.187167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.187186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.191563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.191725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.191749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.196280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.196478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.196498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.201111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.201254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.201274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.205687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.205839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.205859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.210317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.210506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.210525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.215013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.215180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.215201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.219748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.219907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.219927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.224484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.224625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.224644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.229147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.229296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.229316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.233728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.233902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.233922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.238340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.238562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.238581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.243302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.243443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.243463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.248022] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.248173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.248193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.253112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.253274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.253294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.257879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.417 [2024-11-08 07:46:05.258046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.417 [2024-11-08 07:46:05.258065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.417 [2024-11-08 07:46:05.263025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.263197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.263218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.267853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.267994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.268028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.272837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.273026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.273047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.277657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.277799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.277819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.282446] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.282615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.282635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.287242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.287377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.287397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.291909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.292075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.292095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.296686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.296923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.296942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.301574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.301754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.301774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.306254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.306433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.306453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.310909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.311123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.311144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.418 
[2024-11-08 07:46:05.315701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.315872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.315892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.320446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.320660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.320679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.325314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.325473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.325492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.330007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.330183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.330202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.334665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.334857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.334877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.339365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.339590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.339610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.344232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.344395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.344414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.348916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.349105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.349124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.353570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.353719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.353739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.358234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.358385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.358404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.362861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.363088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.363108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.367689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.367851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.367871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.418 [2024-11-08 07:46:05.372472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.418 [2024-11-08 07:46:05.372717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.418 [2024-11-08 07:46:05.372737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.679 [2024-11-08 07:46:05.377122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.679 [2024-11-08 07:46:05.377283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.679 [2024-11-08 07:46:05.377304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.679 [2024-11-08 07:46:05.381807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.679 [2024-11-08 07:46:05.382027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.382047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.386441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.386675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.386695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.391411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.391590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.391611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.396097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.396263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.396283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.400725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.400902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.400921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.405361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.405559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.405579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.410233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.410379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.410398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.414841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.415051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.415071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.419534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.419721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.419740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.424221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.424389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.424409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.428883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.429111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.429132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.433665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.433832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.433851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.438308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.438457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.438477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.442926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.443124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.443144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.447636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.447807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.447827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.452352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.452585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.452605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.457135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.457329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.457348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.461794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.461964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.461995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.466437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.466640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.466660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.471189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.471336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.471355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.475898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.476091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 
07:46:05.476111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.480738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.480902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.480923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.485362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.485510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.485530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.489990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.680 [2024-11-08 07:46:05.490178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.680 [2024-11-08 07:46:05.490198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.680 [2024-11-08 07:46:05.494586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.494774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.494793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.499297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.499448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.499467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.504043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.504196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.504215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.508638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.508820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:47.681 [2024-11-08 07:46:05.508840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.513258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.513420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.513439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.517860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.518048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.518067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.522475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.522655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.522675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.527146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.527287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.527307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.531781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.531958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.531990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.536471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.536612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.536632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.541091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.541235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.541255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.545677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.545824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.545843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.550358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.550568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.550588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.555139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.555301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.555320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.559803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.559996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.560027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.681 6547.00 IOPS, 818.38 MiB/s [2024-11-08T07:46:05.642Z] [2024-11-08 07:46:05.565255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.565397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.565418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.570031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.570259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.570467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.574920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.575090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.575112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.579614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.579805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.579826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.584364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.584536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.584557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.589010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.589147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.589167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.593616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.593855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.593875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.598468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.598659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.598678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.603181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.681 [2024-11-08 07:46:05.603359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.681 [2024-11-08 07:46:05.603378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.681 [2024-11-08 07:46:05.607818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.682 [2024-11-08 
07:46:05.607952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.682 [2024-11-08 07:46:05.607971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.682 [2024-11-08 07:46:05.612456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.682 [2024-11-08 07:46:05.612588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.682 [2024-11-08 07:46:05.612608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.682 [2024-11-08 07:46:05.617079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.682 [2024-11-08 07:46:05.617213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.682 [2024-11-08 07:46:05.617233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.682 [2024-11-08 07:46:05.621695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.682 [2024-11-08 07:46:05.621893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.682 [2024-11-08 07:46:05.621912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.682 [2024-11-08 07:46:05.626502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.682 [2024-11-08 07:46:05.626670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.682 [2024-11-08 07:46:05.626690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.682 [2024-11-08 07:46:05.631151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.682 [2024-11-08 07:46:05.631283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.682 [2024-11-08 07:46:05.631302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.682 [2024-11-08 07:46:05.635805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.682 [2024-11-08 07:46:05.635986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.682 [2024-11-08 07:46:05.636020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.943 [2024-11-08 07:46:05.640531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 
00:18:47.943 [2024-11-08 07:46:05.640685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.943 [2024-11-08 07:46:05.640704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.943 [2024-11-08 07:46:05.645266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.943 [2024-11-08 07:46:05.645462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.943 [2024-11-08 07:46:05.645482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.943 [2024-11-08 07:46:05.650097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.943 [2024-11-08 07:46:05.650227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.943 [2024-11-08 07:46:05.650247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.943 [2024-11-08 07:46:05.654717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.943 [2024-11-08 07:46:05.654856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.943 [2024-11-08 07:46:05.654876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.943 [2024-11-08 07:46:05.659380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.943 [2024-11-08 07:46:05.659548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.943 [2024-11-08 07:46:05.659568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.943 [2024-11-08 07:46:05.664031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.943 [2024-11-08 07:46:05.664210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.943 [2024-11-08 07:46:05.664230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.943 [2024-11-08 07:46:05.668694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.668825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.668845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.673353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.673487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.673508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.677878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.678085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.678105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.682539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.682688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.682708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.687148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.687333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.687354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.691815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.692077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.692098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.696693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.696858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.696878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.701325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.701462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.701482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.705943] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.706104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.706124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.710593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.710744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.710764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.715199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.715328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.715348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.719767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.719990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.720010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.724562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.724736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.724755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.729209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.729382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.729401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.733836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.734022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.734042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.738420] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.738577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.738596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.743028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.743156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.743175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.747687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.747894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.747914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.752506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.752676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.752696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.757142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.757313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.757333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.761722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.761904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.761923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.766360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.766494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.766514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.944 
[2024-11-08 07:46:05.770941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.771118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.771138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.775618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.775872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.775891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.780487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.944 [2024-11-08 07:46:05.780624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.944 [2024-11-08 07:46:05.780643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.944 [2024-11-08 07:46:05.785114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.785270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.785289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.789734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.789897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.789917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.794426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.794584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.794603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.799075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.799205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.799224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.803739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.803909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.803929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.808413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.808548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.808568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.813091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.813224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.813244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.817660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.817796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.817816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.822264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.822458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.822477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.827062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.827229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.827249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.831700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.831831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.831851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.836414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.836546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.836565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.841015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.841185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.841205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.845608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.845768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.845788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.850229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.850363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.850383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.854867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.855047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.855068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.859432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.859628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.859648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.864118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.864281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.864301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.868655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.868826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.868846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.873243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.873400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.873420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.877856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.878003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.878033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.882466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.882624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.882655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.887085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.887214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.887234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.891683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.891817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.891837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:47.945 [2024-11-08 07:46:05.896420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:47.945 [2024-11-08 07:46:05.896576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.945 [2024-11-08 07:46:05.896597] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.901096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.901244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.901264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.905675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.905797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.905817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.910373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.910582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.910601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.915217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.915350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.915370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.919859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.920069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.920089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.924528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.924699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.924718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.929141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.929284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.929303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.933750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.934004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.934023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.938553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.938716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.938736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.943212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.943342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.943362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.947763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.947943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.947963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.952487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.952621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.952641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.957134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.957291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.957310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.961683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.961853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 
07:46:05.961873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.966294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.966428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.966447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.970863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.971052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.207 [2024-11-08 07:46:05.971072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.207 [2024-11-08 07:46:05.975553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.207 [2024-11-08 07:46:05.975707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:05.975727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:05.980142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:05.980325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:05.980344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:05.984796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:05.984956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:05.984989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:05.989365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:05.989534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:05.989554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:05.993947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:05.994112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:48.208 [2024-11-08 07:46:05.994132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:05.998599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:05.998754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:05.998775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.003215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.003350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.003370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.007870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.008093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.008113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.012697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.012869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.012889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.017359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.017531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.017550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.021971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.022116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.022136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.026502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.026672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.026691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.031150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.031280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.031300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.035815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.036026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.036046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.040604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.040777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.040797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.045264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.045427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.045446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.049865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.050068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.050089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.054486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.054662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.054681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.059139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.059274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.059293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.063826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.063958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.063991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.068495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.068627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.068647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.073278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.073439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.073459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.077965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.078137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.078157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.082772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.082893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.082912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.088309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.088447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.208 [2024-11-08 07:46:06.088467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.208 [2024-11-08 07:46:06.093142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.208 [2024-11-08 07:46:06.093295] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.093315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.097887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.098048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.098068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.102666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.102865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.102885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.107453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.107614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.107635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.112197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.112357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.112376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.116835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.117013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.117033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.121476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.121726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.121745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.126389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.126525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.126546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.131012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.131186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.131205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.135655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.135789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.135809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.140359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.140544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.140563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.145224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.145374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.145394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.149992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.150133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.150152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.154655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 07:46:06.154818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.154839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.209 [2024-11-08 07:46:06.159268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.209 [2024-11-08 
07:46:06.159531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.209 [2024-11-08 07:46:06.159552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.470 [2024-11-08 07:46:06.164290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.470 [2024-11-08 07:46:06.164429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.470 [2024-11-08 07:46:06.164449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.470 [2024-11-08 07:46:06.169040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.470 [2024-11-08 07:46:06.169199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.470 [2024-11-08 07:46:06.169219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.470 [2024-11-08 07:46:06.173783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.470 [2024-11-08 07:46:06.173914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.470 [2024-11-08 07:46:06.173934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.470 [2024-11-08 07:46:06.178519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.470 [2024-11-08 07:46:06.178714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.470 [2024-11-08 07:46:06.178735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.470 [2024-11-08 07:46:06.183426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.183586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.183607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.188150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.188323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.188343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.192910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with 
pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.193060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.193081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.197591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.197764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.197783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.202300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.202490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.202510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.207259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.207399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.207421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.211986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.212207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.212228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.216722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.216899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.216919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.221460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.221706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.221726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.226363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.226535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.226555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.231010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.231178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.231198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.235621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.235756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.235775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.240308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.240502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.240521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.245114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.245281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.245300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.249770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.249923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.249943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.254474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.254691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.254712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.259149] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.259323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.259343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.263852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.264105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.264125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.268824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.269038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.269344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.273728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.273996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.274211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.278926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.279172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.279447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.283901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.284191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.284376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.289255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.289490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.289643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.294295] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.294513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.294692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.471 [2024-11-08 07:46:06.299227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.471 [2024-11-08 07:46:06.299521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.471 [2024-11-08 07:46:06.299689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.304314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.304556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.304709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.309294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.309540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.309672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.314302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.314503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.314805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.319350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.319567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.319714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.324379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.324628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.324782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.472 
[2024-11-08 07:46:06.329336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.329549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.329788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.334197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.334402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.334651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.339048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.339308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.339536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.344095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.344227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.344249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.348805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.349000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.349021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.353582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.353749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.353768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.358295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.358467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.358487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.362938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.363126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.363147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.367632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.367840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.367861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.372488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.372641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.372661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.377114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.377246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.377266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.381733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.381903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.381922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.386389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.386523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.386543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.391030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.391210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.391230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.395707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.395963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.395983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.400578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.400712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.400731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.405191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.405346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.405365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.409765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.409897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.409917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.414373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.414511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.414533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.472 [2024-11-08 07:46:06.419001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.472 [2024-11-08 07:46:06.419173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.472 [2024-11-08 07:46:06.419193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.473 [2024-11-08 07:46:06.423661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.473 [2024-11-08 07:46:06.423875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.473 [2024-11-08 07:46:06.423896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.428643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.428803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.428823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.433304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.433471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.433492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.437913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.438108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.438127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.442556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.442761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.442782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.447277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.447493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.447513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.452113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.452247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.452267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.456742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.456895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.456914] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.461352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.461506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.461526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.465966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.466132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.466152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.733 [2024-11-08 07:46:06.470533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.733 [2024-11-08 07:46:06.470717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.733 [2024-11-08 07:46:06.470737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.475231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.475363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.475383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.479905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.480099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.480118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.484582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.484750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.484769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.489153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.489322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.489342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.493753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.493995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.494016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.498669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.498790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.498810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.503272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.503407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.503427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.507957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.508165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.508185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.512697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.512832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.512852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.517237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.517466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.517487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.521992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.522126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 
07:46:06.522146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.526573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.526754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.526773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.531262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.531416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.531436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.535919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.536072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.536092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.540528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.540735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.540755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.545414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.545603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.545623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.550039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.550205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.550224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.554689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.554824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:48.734 [2024-11-08 07:46:06.554843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:48.734 [2024-11-08 07:46:06.559294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22e6a90) with pdu=0x2000166ff3c8 00:18:48.734 [2024-11-08 07:46:06.559465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.734 [2024-11-08 07:46:06.559484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:48.734 6562.00 IOPS, 820.25 MiB/s 00:18:48.734 Latency(us) 00:18:48.734 [2024-11-08T07:46:06.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.734 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:48.734 nvme0n1 : 2.00 6560.69 820.09 0.00 0.00 2435.07 1856.85 6085.49 00:18:48.734 [2024-11-08T07:46:06.695Z] =================================================================================================================== 00:18:48.734 [2024-11-08T07:46:06.695Z] Total : 6560.69 820.09 0.00 0.00 2435.07 1856.85 6085.49 00:18:48.734 { 00:18:48.734 "results": [ 00:18:48.734 { 00:18:48.734 "job": "nvme0n1", 00:18:48.734 "core_mask": "0x2", 00:18:48.734 "workload": "randwrite", 00:18:48.734 "status": "finished", 00:18:48.734 "queue_depth": 16, 00:18:48.734 "io_size": 131072, 00:18:48.734 "runtime": 2.003449, 00:18:48.734 "iops": 6560.6860968260235, 00:18:48.734 "mibps": 820.0857621032529, 00:18:48.734 "io_failed": 0, 00:18:48.734 "io_timeout": 0, 00:18:48.734 "avg_latency_us": 2435.0742555719794, 00:18:48.734 "min_latency_us": 1856.8533333333332, 00:18:48.734 "max_latency_us": 6085.4857142857145 00:18:48.734 } 00:18:48.734 ], 00:18:48.734 "core_count": 1 00:18:48.734 } 00:18:48.734 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:48.734 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:48.734 | .driver_specific 00:18:48.734 | .nvme_error 00:18:48.734 | .status_code 00:18:48.734 | .command_transient_transport_error' 00:18:48.734 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:48.734 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 424 > 0 )) 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80139 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80139 ']' 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80139 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80139 00:18:48.995 killing process with 
pid 80139 00:18:48.995 Received shutdown signal, test time was about 2.000000 seconds 00:18:48.995 00:18:48.995 Latency(us) 00:18:48.995 [2024-11-08T07:46:06.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.995 [2024-11-08T07:46:06.956Z] =================================================================================================================== 00:18:48.995 [2024-11-08T07:46:06.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80139' 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80139 00:18:48.995 07:46:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80139 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79947 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79947 ']' 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79947 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79947 00:18:49.256 killing process with pid 79947 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79947' 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79947 00:18:49.256 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79947 00:18:49.516 00:18:49.516 real 0m16.356s 00:18:49.516 user 0m29.138s 00:18:49.516 sys 0m6.292s 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:49.516 ************************************ 00:18:49.516 END TEST nvmf_digest_error 00:18:49.516 ************************************ 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:49.516 07:46:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:49.516 rmmod nvme_tcp 00:18:49.516 rmmod nvme_fabrics 00:18:49.516 rmmod nvme_keyring 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:49.516 Process with pid 79947 is not found 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79947 ']' 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79947 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 79947 ']' 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 79947 00:18:49.516 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (79947) - No such process 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 79947 is not found' 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:49.516 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:49.776 00:18:49.776 real 0m34.548s 00:18:49.776 user 0m59.728s 00:18:49.776 sys 0m13.367s 00:18:49.776 ************************************ 00:18:49.776 END TEST nvmf_digest 00:18:49.776 ************************************ 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:49.776 07:46:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:50.036 07:46:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:50.036 07:46:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:50.036 07:46:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:50.036 07:46:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:50.036 07:46:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.037 ************************************ 00:18:50.037 START TEST nvmf_host_multipath 00:18:50.037 ************************************ 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:50.037 * Looking for test storage... 
00:18:50.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:50.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.037 --rc genhtml_branch_coverage=1 00:18:50.037 --rc genhtml_function_coverage=1 00:18:50.037 --rc genhtml_legend=1 00:18:50.037 --rc geninfo_all_blocks=1 00:18:50.037 --rc geninfo_unexecuted_blocks=1 00:18:50.037 00:18:50.037 ' 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:50.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.037 --rc genhtml_branch_coverage=1 00:18:50.037 --rc genhtml_function_coverage=1 00:18:50.037 --rc genhtml_legend=1 00:18:50.037 --rc geninfo_all_blocks=1 00:18:50.037 --rc geninfo_unexecuted_blocks=1 00:18:50.037 00:18:50.037 ' 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:50.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.037 --rc genhtml_branch_coverage=1 00:18:50.037 --rc genhtml_function_coverage=1 00:18:50.037 --rc genhtml_legend=1 00:18:50.037 --rc geninfo_all_blocks=1 00:18:50.037 --rc geninfo_unexecuted_blocks=1 00:18:50.037 00:18:50.037 ' 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:50.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.037 --rc genhtml_branch_coverage=1 00:18:50.037 --rc genhtml_function_coverage=1 00:18:50.037 --rc genhtml_legend=1 00:18:50.037 --rc geninfo_all_blocks=1 00:18:50.037 --rc geninfo_unexecuted_blocks=1 00:18:50.037 00:18:50.037 ' 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.037 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.298 07:46:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.298 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:50.298 Cannot find device "nvmf_init_br" 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:50.298 Cannot find device "nvmf_init_br2" 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:50.298 Cannot find device "nvmf_tgt_br" 00:18:50.298 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:50.299 Cannot find device "nvmf_tgt_br2" 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:50.299 Cannot find device "nvmf_init_br" 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:50.299 Cannot find device "nvmf_init_br2" 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:50.299 Cannot find device "nvmf_tgt_br" 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:50.299 Cannot find device "nvmf_tgt_br2" 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:50.299 Cannot find device "nvmf_br" 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:50.299 Cannot find device "nvmf_init_if" 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:50.299 Cannot find device "nvmf_init_if2" 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:50.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:50.299 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:50.559 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:50.560 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:50.560 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:18:50.560 00:18:50.560 --- 10.0.0.3 ping statistics --- 00:18:50.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.560 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:50.560 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:50.560 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:18:50.560 00:18:50.560 --- 10.0.0.4 ping statistics --- 00:18:50.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.560 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:50.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:50.560 00:18:50.560 --- 10.0.0.1 ping statistics --- 00:18:50.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.560 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:50.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:50.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:50.560 00:18:50.560 --- 10.0.0.2 ping statistics --- 00:18:50.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.560 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:50.560 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80466 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80466 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80466 ']' 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:50.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:50.820 07:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:50.820 [2024-11-08 07:46:08.592564] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:18:50.820 [2024-11-08 07:46:08.592664] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.820 [2024-11-08 07:46:08.756178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:51.080 [2024-11-08 07:46:08.823732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.080 [2024-11-08 07:46:08.823796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.080 [2024-11-08 07:46:08.823820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.080 [2024-11-08 07:46:08.823841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.080 [2024-11-08 07:46:08.823857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.080 [2024-11-08 07:46:08.825187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.080 [2024-11-08 07:46:08.825203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.080 [2024-11-08 07:46:08.888106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:51.651 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.651 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:18:51.651 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.651 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.651 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:51.917 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.917 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80466 00:18:51.917 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:51.917 [2024-11-08 07:46:09.872528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.177 07:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:52.436 Malloc0 00:18:52.436 07:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:52.436 07:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.696 07:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:52.955 [2024-11-08 07:46:10.737433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:52.955 07:46:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:53.215 [2024-11-08 07:46:11.013772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80516 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80516 /var/tmp/bdevperf.sock 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80516 ']' 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.215 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:53.474 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.474 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:18:53.474 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:53.733 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:53.992 Nvme0n1 00:18:53.992 07:46:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:54.251 Nvme0n1 00:18:54.251 07:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:54.251 07:46:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:55.632 07:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:55.632 07:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:55.632 07:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:55.892 07:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:55.892 07:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80466 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:55.892 07:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80554 00:18:55.892 07:46:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:02.512 Attaching 4 probes... 00:19:02.512 @path[10.0.0.3, 4421]: 19179 00:19:02.512 @path[10.0.0.3, 4421]: 19627 00:19:02.512 @path[10.0.0.3, 4421]: 19701 00:19:02.512 @path[10.0.0.3, 4421]: 19727 00:19:02.512 @path[10.0.0.3, 4421]: 19712 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80554 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:02.512 07:46:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:02.512 07:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:02.512 07:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:02.512 07:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80667 00:19:02.512 07:46:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80466 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:02.512 07:46:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:09.080 Attaching 4 probes... 00:19:09.080 @path[10.0.0.3, 4420]: 20827 00:19:09.080 @path[10.0.0.3, 4420]: 20864 00:19:09.080 @path[10.0.0.3, 4420]: 21047 00:19:09.080 @path[10.0.0.3, 4420]: 21058 00:19:09.080 @path[10.0.0.3, 4420]: 21009 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80667 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:09.080 07:46:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:09.339 07:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:09.339 07:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80466 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:09.339 07:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80785 00:19:09.339 07:46:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.909 Attaching 4 probes... 00:19:15.909 @path[10.0.0.3, 4421]: 15151 00:19:15.909 @path[10.0.0.3, 4421]: 19227 00:19:15.909 @path[10.0.0.3, 4421]: 19156 00:19:15.909 @path[10.0.0.3, 4421]: 19392 00:19:15.909 @path[10.0.0.3, 4421]: 19440 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80785 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:15.909 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:16.169 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:16.169 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80896 00:19:16.169 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80466 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:16.169 07:46:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:22.734 07:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:22.734 07:46:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:22.734 Attaching 4 probes... 
00:19:22.734 00:19:22.734 00:19:22.734 00:19:22.734 00:19:22.734 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80896 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81010 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80466 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:22.734 07:46:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:29.307 Attaching 4 probes... 
00:19:29.307 @path[10.0.0.3, 4421]: 19163 00:19:29.307 @path[10.0.0.3, 4421]: 19597 00:19:29.307 @path[10.0.0.3, 4421]: 19662 00:19:29.307 @path[10.0.0.3, 4421]: 19556 00:19:29.307 @path[10.0.0.3, 4421]: 19544 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81010 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:29.307 07:46:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:29.307 07:46:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:30.686 07:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:30.686 07:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81133 00:19:30.686 07:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80466 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:30.686 07:46:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:37.253 Attaching 4 probes... 
00:19:37.253 @path[10.0.0.3, 4420]: 20135 00:19:37.253 @path[10.0.0.3, 4420]: 20486 00:19:37.253 @path[10.0.0.3, 4420]: 20421 00:19:37.253 @path[10.0.0.3, 4420]: 20406 00:19:37.253 @path[10.0.0.3, 4420]: 20492 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81133 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:37.253 [2024-11-08 07:46:54.766554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:37.253 07:46:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:37.253 07:46:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:43.822 07:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:43.822 07:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81313 00:19:43.822 07:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80466 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:43.822 07:47:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:50.400 Attaching 4 probes... 
00:19:50.400 @path[10.0.0.3, 4421]: 19201 00:19:50.400 @path[10.0.0.3, 4421]: 19451 00:19:50.400 @path[10.0.0.3, 4421]: 19513 00:19:50.400 @path[10.0.0.3, 4421]: 19536 00:19:50.400 @path[10.0.0.3, 4421]: 19613 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81313 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80516 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80516 ']' 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80516 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80516 00:19:50.400 killing process with pid 80516 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80516' 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80516 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80516 00:19:50.400 { 00:19:50.400 "results": [ 00:19:50.400 { 00:19:50.400 "job": "Nvme0n1", 00:19:50.400 "core_mask": "0x4", 00:19:50.400 "workload": "verify", 00:19:50.400 "status": "terminated", 00:19:50.400 "verify_range": { 00:19:50.400 "start": 0, 00:19:50.400 "length": 16384 00:19:50.400 }, 00:19:50.400 "queue_depth": 128, 00:19:50.400 "io_size": 4096, 00:19:50.400 "runtime": 55.025665, 00:19:50.400 "iops": 8520.351366948496, 00:19:50.400 "mibps": 33.282622527142564, 00:19:50.400 "io_failed": 0, 00:19:50.400 "io_timeout": 0, 00:19:50.400 "avg_latency_us": 15003.125031852815, 00:19:50.400 "min_latency_us": 963.5352380952381, 00:19:50.400 "max_latency_us": 7030452.419047619 00:19:50.400 } 00:19:50.400 ], 00:19:50.400 "core_count": 1 00:19:50.400 } 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80516 00:19:50.400 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:50.400 [2024-11-08 07:46:11.074737] Starting SPDK v25.01-pre git sha1 e729adafb 
/ DPDK 24.03.0 initialization... 00:19:50.400 [2024-11-08 07:46:11.074817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80516 ] 00:19:50.400 [2024-11-08 07:46:11.226173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.400 [2024-11-08 07:46:11.280875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.400 [2024-11-08 07:46:11.328503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.400 Running I/O for 90 seconds... 00:19:50.400 10503.00 IOPS, 41.03 MiB/s [2024-11-08T07:47:08.361Z] 10373.00 IOPS, 40.52 MiB/s [2024-11-08T07:47:08.361Z] 10184.67 IOPS, 39.78 MiB/s [2024-11-08T07:47:08.361Z] 10094.50 IOPS, 39.43 MiB/s [2024-11-08T07:47:08.361Z] 10046.80 IOPS, 39.25 MiB/s [2024-11-08T07:47:08.361Z] 10017.67 IOPS, 39.13 MiB/s [2024-11-08T07:47:08.361Z] 9994.57 IOPS, 39.04 MiB/s [2024-11-08T07:47:08.361Z] 9975.25 IOPS, 38.97 MiB/s [2024-11-08T07:47:08.361Z] [2024-11-08 07:46:20.422488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.400 [2024-11-08 07:46:20.422537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:50.400 [2024-11-08 07:46:20.422597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.400 [2024-11-08 07:46:20.422626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:50.400 [2024-11-08 07:46:20.422645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.400 [2024-11-08 07:46:20.422658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:50.400 [2024-11-08 07:46:20.422677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.400 [2024-11-08 07:46:20.422690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.400 [2024-11-08 07:46:20.422708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.400 [2024-11-08 07:46:20.422721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.422740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.422753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.422771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 
07:46:20.422784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.422802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.422815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.422833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.422845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.422863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.422896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.422915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.422927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.422946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.422958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.422985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.422999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122832 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.423389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.423421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.423452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.423484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.423515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.423547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.423578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.401 [2024-11-08 07:46:20.423610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 
p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.401 [2024-11-08 07:46:20.423871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:50.401 [2024-11-08 07:46:20.423889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.423903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.423921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.423934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.423952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.423965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.423992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 
07:46:20.424389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123032 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.402 [2024-11-08 07:46:20.424923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.424985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.424999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.425034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.425048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.402 [2024-11-08 07:46:20.425067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.402 [2024-11-08 07:46:20.425081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.425114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.425146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.425178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.425223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:50.403 
[2024-11-08 07:46:20.425405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.425959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.425972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.427254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.403 [2024-11-08 07:46:20.427283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.427305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.427319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.427338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.427351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.427370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.427384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.427411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.427425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.427443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.427456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.427474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.427487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.403 [2024-11-08 07:46:20.427506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.403 [2024-11-08 07:46:20.427519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.427963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.427991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123680 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:101 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:20.428830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:20.428844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.404 10001.22 IOPS, 39.07 MiB/s [2024-11-08T07:47:08.365Z] 10056.30 IOPS, 39.28 MiB/s [2024-11-08T07:47:08.365Z] 10094.09 IOPS, 39.43 MiB/s [2024-11-08T07:47:08.365Z] 10130.25 IOPS, 39.57 MiB/s [2024-11-08T07:47:08.365Z] 10158.38 IOPS, 39.68 MiB/s [2024-11-08T07:47:08.365Z] 10185.36 IOPS, 39.79 MiB/s [2024-11-08T07:47:08.365Z] [2024-11-08 07:46:26.957785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:26.957839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.957900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:26.957915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.957934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:26.957948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.957988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:26.958012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:26.958043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:26.958074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:26.958106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.404 [2024-11-08 07:46:26.958137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.404 [2024-11-08 07:46:26.958168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.404 [2024-11-08 07:46:26.958199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.404 [2024-11-08 07:46:26.958230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.404 [2024-11-08 07:46:26.958261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.404 [2024-11-08 07:46:26.958291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.404 [2024-11-08 07:46:26.958322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.404 [2024-11-08 07:46:26.958353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:19:50.404 [2024-11-08 07:46:26.958396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.404 [2024-11-08 07:46:26.958410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:50.404 [2024-11-08 07:46:26.958440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.958714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.958745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.958777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.958814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.958846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.958877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.958908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.958940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.958961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.958975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.959015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.959046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.959078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.959109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.959141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.959172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.405 [2024-11-08 07:46:26.959208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.959240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.959271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.959303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.405 [2024-11-08 07:46:26.959334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:50.405 [2024-11-08 07:46:26.959353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:50.405 [2024-11-08 07:46:26.959365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.959397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.959428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.959459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.959965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.959985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.960276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.960314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.960349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.960384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.960419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.960453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.960487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.406 [2024-11-08 07:46:26.960522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.960557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.960592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:19:50.406 [2024-11-08 07:46:26.960613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.960626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.960661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.960702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.960736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.960771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.406 [2024-11-08 07:46:26.960806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.406 [2024-11-08 07:46:26.960827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.960840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.960862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.960875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.960897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.960910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.960932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.960945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.960966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.960989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.407 [2024-11-08 07:46:26.961414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.407 [2024-11-08 07:46:26.961449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.407 [2024-11-08 07:46:26.961483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.407 [2024-11-08 07:46:26.961518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.407 [2024-11-08 07:46:26.961557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.407 [2024-11-08 07:46:26.961592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.407 [2024-11-08 07:46:26.961626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:50.407 [2024-11-08 07:46:26.961661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.961968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.961991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.962017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.962031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.962052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.962065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:50.407 [2024-11-08 07:46:26.962087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.407 [2024-11-08 07:46:26.962100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:26.962134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:26.962169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:26.962204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:26.962238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:26.962274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:26.962309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:26.962343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:26.962377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:26.962412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:26.962451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:26.962486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:26.962508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:26.962521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:50.408 9954.33 IOPS, 38.88 MiB/s [2024-11-08T07:47:08.369Z] 9541.88 IOPS, 37.27 MiB/s [2024-11-08T07:47:08.369Z] 9545.29 IOPS, 37.29 MiB/s [2024-11-08T07:47:08.369Z] 9548.33 IOPS, 37.30 MiB/s [2024-11-08T07:47:08.369Z] 9553.16 IOPS, 37.32 MiB/s [2024-11-08T07:47:08.369Z] 9560.30 IOPS, 37.34 MiB/s [2024-11-08T07:47:08.369Z] 9566.76 IOPS, 37.37 MiB/s [2024-11-08T07:47:08.369Z] [2024-11-08 07:46:33.941469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:33.941516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:33.941584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:33.941615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:33.941648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:19:50.408 [2024-11-08 07:46:33.941666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:33.941679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:33.941710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:33.941742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.408 [2024-11-08 07:46:33.941773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:33.941827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:33.941859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:33.941891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:33.941923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:33.941954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.941973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:33.941986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.942017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:33.942031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.942049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.408 [2024-11-08 07:46:33.942062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.408 [2024-11-08 07:46:33.942081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:50.409 [2024-11-08 07:46:33.942691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.942886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.942971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.942984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.943011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 
nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.943025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.943044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.943057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.943076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.943094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.943113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.943126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.943145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.409 [2024-11-08 07:46:33.943158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.943177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.943190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.943209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.409 [2024-11-08 07:46:33.943223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:50.409 [2024-11-08 07:46:33.943242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.410 [2024-11-08 07:46:33.943255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.410 [2024-11-08 07:46:33.943287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.410 [2024-11-08 07:46:33.943319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.410 [2024-11-08 07:46:33.943350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.410 [2024-11-08 07:46:33.943382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.410 [2024-11-08 07:46:33.943414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:19:50.410 [2024-11-08 07:46:33.943915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.943963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.943996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.410 [2024-11-08 07:46:33.944507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.410 [2024-11-08 07:46:33.944543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:50.410 [2024-11-08 07:46:33.944564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.944578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.944618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.944653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.944687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.944722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.944757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.944792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.944836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.944871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.944906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.944941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.944962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:50.411 [2024-11-08 07:46:33.944984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 
nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.411 [2024-11-08 07:46:33.945381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.411 [2024-11-08 07:46:33.945772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:50.411 [2024-11-08 07:46:33.945794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.412 [2024-11-08 07:46:33.945807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.945828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.412 [2024-11-08 07:46:33.945842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.945863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.412 [2024-11-08 07:46:33.945877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.945898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.412 [2024-11-08 07:46:33.945912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.945938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.412 [2024-11-08 07:46:33.945952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.945997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:33.946012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.946034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:33.946047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:19:50.412 [2024-11-08 07:46:33.946069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:33.946082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.946103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:33.946117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.946138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:33.946151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.946173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:33.946186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.946207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:33.946222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:33.946244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:33.946258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:50.412 9403.55 IOPS, 36.73 MiB/s [2024-11-08T07:47:08.373Z] 8994.70 IOPS, 35.14 MiB/s [2024-11-08T07:47:08.373Z] 8619.92 IOPS, 33.67 MiB/s [2024-11-08T07:47:08.373Z] 8275.12 IOPS, 32.32 MiB/s [2024-11-08T07:47:08.373Z] 7956.85 IOPS, 31.08 MiB/s [2024-11-08T07:47:08.373Z] 7662.15 IOPS, 29.93 MiB/s [2024-11-08T07:47:08.373Z] 7388.50 IOPS, 28.86 MiB/s [2024-11-08T07:47:08.373Z] 7250.38 IOPS, 28.32 MiB/s [2024-11-08T07:47:08.373Z] 7335.37 IOPS, 28.65 MiB/s [2024-11-08T07:47:08.373Z] 7414.35 IOPS, 28.96 MiB/s [2024-11-08T07:47:08.373Z] 7490.66 IOPS, 29.26 MiB/s [2024-11-08T07:47:08.373Z] 7559.67 IOPS, 29.53 MiB/s [2024-11-08T07:47:08.373Z] 7625.32 IOPS, 29.79 MiB/s [2024-11-08T07:47:08.373Z] [2024-11-08 07:46:47.246821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.246875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.246920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.246954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.246973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.246999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.247017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.247031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.247049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.247062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.247080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.247094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.247111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.247124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.247142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.247156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.247173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.247186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.247204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.412 [2024-11-08 07:46:47.247217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:50.412 [2024-11-08 07:46:47.247236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.247249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.247280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.247311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.247341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.247381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.247413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:50.413 [2024-11-08 07:46:47.247929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.247973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.247987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 
07:46:47.248241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.413 [2024-11-08 07:46:47.248335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.413 [2024-11-08 07:46:47.248348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.248374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.248403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.414 [2024-11-08 07:46:47.248823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.248853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.248880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.248906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.248932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.248958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.248971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.248991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.249005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.249017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.249030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.249042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.249056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.249068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.249082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.249094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.249108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.249121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.414 [2024-11-08 07:46:47.249134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.414 [2024-11-08 07:46:47.249146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 
[2024-11-08 07:46:47.249336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:36 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:50.415 [2024-11-08 07:46:47.249931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.249974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.249987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.415 [2024-11-08 07:46:47.250008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.415 [2024-11-08 07:46:47.250021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50344 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.250356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74e290 is same with the state(6) to be set 00:19:50.416 [2024-11-08 07:46:47.250385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50400 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50856 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50864 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50872 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50880 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50888 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50896 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50904 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.250766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:50.416 [2024-11-08 07:46:47.250776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:50.416 [2024-11-08 07:46:47.250786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50912 len:8 PRP1 0x0 PRP2 0x0 00:19:50.416 [2024-11-08 07:46:47.250798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.251713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:50.416 [2024-11-08 07:46:47.251779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:50.416 [2024-11-08 07:46:47.251796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:50.416 [2024-11-08 07:46:47.251823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bae50 (9): Bad file descriptor 00:19:50.416 [2024-11-08 07:46:47.252176] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.416 [2024-11-08 07:46:47.252203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bae50 with addr=10.0.0.3, port=4421 00:19:50.417 [2024-11-08 07:46:47.252217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bae50 is same with the state(6) to be set 00:19:50.417 [2024-11-08 07:46:47.252251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bae50 (9): Bad file descriptor 00:19:50.417 [2024-11-08 07:46:47.252276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:50.417 [2024-11-08 07:46:47.252289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:50.417 [2024-11-08 07:46:47.252303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:50.417 [2024-11-08 07:46:47.252317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:50.417 [2024-11-08 07:46:47.252340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:50.417 7685.94 IOPS, 30.02 MiB/s [2024-11-08T07:47:08.378Z] 7757.11 IOPS, 30.30 MiB/s [2024-11-08T07:47:08.378Z] 7819.68 IOPS, 30.55 MiB/s [2024-11-08T07:47:08.378Z] 7883.79 IOPS, 30.80 MiB/s [2024-11-08T07:47:08.378Z] 7943.38 IOPS, 31.03 MiB/s [2024-11-08T07:47:08.378Z] 8000.80 IOPS, 31.25 MiB/s [2024-11-08T07:47:08.378Z] 8054.24 IOPS, 31.46 MiB/s [2024-11-08T07:47:08.378Z] 8106.67 IOPS, 31.67 MiB/s [2024-11-08T07:47:08.378Z] 8153.30 IOPS, 31.85 MiB/s [2024-11-08T07:47:08.378Z] 8200.73 IOPS, 32.03 MiB/s [2024-11-08T07:47:08.378Z] [2024-11-08 07:46:57.306801] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
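The burst of ABORTED - SQ DELETION notices above is the multipath failover at work: when the first TCP connection drops, every command still queued on I/O qpair 1 is completed manually with that status, bdev_nvme resets the controller, and the reconnect to 10.0.0.3 port 4421 first fails with errno 111 (ECONNREFUSED) before succeeding roughly ten seconds later while the IOPS readings recover. To summarize such a storm offline, a small awk pass over a saved copy of the log works; this is only an illustrative sketch, and the file name build.log is an assumption, not part of the test:

    # Tally aborted READ/WRITE submissions and abort completions from a saved log copy.
    awk '/nvme_io_qpair_print_command/ { ops[($0 ~ /WRITE sqid/) ? "WRITE" : "READ"]++ }
         /ABORTED - SQ DELETION/       { aborted++ }
         END { printf "READ=%d WRITE=%d aborted completions=%d\n", ops["READ"], ops["WRITE"], aborted }' build.log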
00:19:50.417 8245.33 IOPS, 32.21 MiB/s [2024-11-08T07:47:08.378Z] 8280.83 IOPS, 32.35 MiB/s [2024-11-08T07:47:08.378Z] 8313.72 IOPS, 32.48 MiB/s [2024-11-08T07:47:08.378Z] 8343.46 IOPS, 32.59 MiB/s [2024-11-08T07:47:08.378Z] 8371.45 IOPS, 32.70 MiB/s [2024-11-08T07:47:08.378Z] 8396.60 IOPS, 32.80 MiB/s [2024-11-08T07:47:08.378Z] 8423.57 IOPS, 32.90 MiB/s [2024-11-08T07:47:08.378Z] 8450.27 IOPS, 33.01 MiB/s [2024-11-08T07:47:08.378Z] 8474.38 IOPS, 33.10 MiB/s [2024-11-08T07:47:08.378Z] 8498.41 IOPS, 33.20 MiB/s [2024-11-08T07:47:08.378Z] 8520.11 IOPS, 33.28 MiB/s [2024-11-08T07:47:08.378Z] Received shutdown signal, test time was about 55.026313 seconds 00:19:50.417 00:19:50.417 Latency(us) 00:19:50.417 [2024-11-08T07:47:08.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.417 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.417 Verification LBA range: start 0x0 length 0x4000 00:19:50.417 Nvme0n1 : 55.03 8520.35 33.28 0.00 0.00 15003.13 963.54 7030452.42 00:19:50.417 [2024-11-08T07:47:08.378Z] =================================================================================================================== 00:19:50.417 [2024-11-08T07:47:08.378Z] Total : 8520.35 33.28 0.00 0.00 15003.13 963.54 7030452.42 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.417 rmmod nvme_tcp 00:19:50.417 rmmod nvme_fabrics 00:19:50.417 rmmod nvme_keyring 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80466 ']' 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80466 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80466 ']' 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80466 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:50.417 07:47:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80466 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:50.417 killing process with pid 80466 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80466' 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80466 00:19:50.417 07:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80466 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:50.417 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.418 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.418 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:50.418 00:19:50.418 real 1m0.506s 00:19:50.418 user 2m42.123s 00:19:50.418 sys 0m22.938s 00:19:50.418 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:50.418 07:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:50.418 ************************************ 00:19:50.418 END TEST nvmf_host_multipath 00:19:50.418 ************************************ 00:19:50.418 07:47:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:50.418 07:47:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:50.418 07:47:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:50.418 07:47:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.678 ************************************ 00:19:50.678 START TEST nvmf_timeout 00:19:50.678 ************************************ 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:50.678 * Looking for test storage... 00:19:50.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:50.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.678 --rc genhtml_branch_coverage=1 00:19:50.678 --rc genhtml_function_coverage=1 00:19:50.678 --rc genhtml_legend=1 00:19:50.678 --rc geninfo_all_blocks=1 00:19:50.678 --rc geninfo_unexecuted_blocks=1 00:19:50.678 00:19:50.678 ' 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:50.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.678 --rc genhtml_branch_coverage=1 00:19:50.678 --rc genhtml_function_coverage=1 00:19:50.678 --rc genhtml_legend=1 00:19:50.678 --rc geninfo_all_blocks=1 00:19:50.678 --rc geninfo_unexecuted_blocks=1 00:19:50.678 00:19:50.678 ' 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:50.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.678 --rc genhtml_branch_coverage=1 00:19:50.678 --rc genhtml_function_coverage=1 00:19:50.678 --rc genhtml_legend=1 00:19:50.678 --rc geninfo_all_blocks=1 00:19:50.678 --rc geninfo_unexecuted_blocks=1 00:19:50.678 00:19:50.678 ' 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:50.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.678 --rc genhtml_branch_coverage=1 00:19:50.678 --rc genhtml_function_coverage=1 00:19:50.678 --rc genhtml_legend=1 00:19:50.678 --rc geninfo_all_blocks=1 00:19:50.678 --rc geninfo_unexecuted_blocks=1 00:19:50.678 00:19:50.678 ' 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:50.678 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.679 07:47:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.679 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 
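nvmftestinit drives the virtual-network bring-up traced below: the stale nvmf_* links are deleted first (hence the "Cannot find device" noise), the nvmf_tgt_ns_spdk namespace is created, veth pairs for the initiator and target sides are wired into the nvmf_br bridge, 10.0.0.1-10.0.0.4 are assigned, and connectivity is verified with ping. Condensed to its essentials, the topology is roughly the sketch that follows; only the first initiator/target pair is shown, and the real helper in test/nvmf/common.sh also sets up the second pair and the SPDK_NVMF iptables rules:

    # Minimal version of the veth/bridge topology that nvmf_veth_init builds.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                        # bridge the two peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3                                             # same sanity check the log performs below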
00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:50.679 Cannot find device "nvmf_init_br" 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:50.679 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:50.938 Cannot find device "nvmf_init_br2" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:50.938 Cannot find device "nvmf_tgt_br" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.938 Cannot find device "nvmf_tgt_br2" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:50.938 Cannot find device "nvmf_init_br" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:50.938 Cannot find device "nvmf_init_br2" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:50.938 Cannot find device "nvmf_tgt_br" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:50.938 Cannot find device "nvmf_tgt_br2" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:50.938 Cannot find device "nvmf_br" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:50.938 Cannot find device "nvmf_init_if" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:50.938 Cannot find device "nvmf_init_if2" 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:50.938 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.198 07:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:51.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:51.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:19:51.198 00:19:51.198 --- 10.0.0.3 ping statistics --- 00:19:51.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.198 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:51.198 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:51.198 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:19:51.198 00:19:51.198 --- 10.0.0.4 ping statistics --- 00:19:51.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.198 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:51.198 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:51.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:19:51.198 00:19:51.198 --- 10.0.0.1 ping statistics --- 00:19:51.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.199 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:51.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:51.199 00:19:51.199 --- 10.0.0.2 ping statistics --- 00:19:51.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.199 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81675 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81675 00:19:51.199 07:47:09 
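The trace above is nvmf/common.sh building the test network: the initiator-side interfaces (nvmf_init_if/nvmf_init_if2, 10.0.0.1-2/24) stay on the host, the target-side interfaces (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3-4/24) are moved into the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, TCP port 4420 is opened in iptables, and connectivity is verified with pings in both directions. A minimal standalone sketch of the same topology (one interface pair only, and assuming veth pairs back the *_if/*_br names, which are created earlier in the script and not shown in this excerpt):

# sketch only -- not the verbatim nvmf/common.sh helper
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # assumed veth backing
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # assumed veth backing
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                  # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host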
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81675 ']' 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.199 07:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:51.458 [2024-11-08 07:47:09.192113] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:19:51.458 [2024-11-08 07:47:09.192203] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.458 [2024-11-08 07:47:09.353018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:51.718 [2024-11-08 07:47:09.423502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.718 [2024-11-08 07:47:09.423592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.718 [2024-11-08 07:47:09.423611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.718 [2024-11-08 07:47:09.423626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.718 [2024-11-08 07:47:09.423639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
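At this point the target application has been launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3, reactors on cores 0-1) and waitforlisten blocks until PID 81675 is up and answering on /var/tmp/spdk.sock. The real helpers live in nvmf/common.sh and autotest_common.sh; a rough stand-in, assuming a simple poll of the RPC socket is enough:

# illustrative stand-in for nvmfappstart/waitforlisten, not the actual helpers
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # assumption: plain polling; the real helper also caps retries (max_retries=100 above)
done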
00:19:51.718 [2024-11-08 07:47:09.425022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.718 [2024-11-08 07:47:09.425029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.718 [2024-11-08 07:47:09.492960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:52.287 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.287 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:52.287 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.287 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.287 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:52.546 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.546 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.546 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:52.546 [2024-11-08 07:47:10.479707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.546 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:53.115 Malloc0 00:19:53.115 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.115 07:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:53.375 [2024-11-08 07:47:11.308473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:53.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81720 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81720 /var/tmp/bdevperf.sock 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81720 ']' 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
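With the target answering on its RPC socket, timeout.sh provisions it and then starts the host side: a TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.3:4420, followed by bdevperf launched with -z so it waits for its own RPC configuration. The RPC sequence as issued in the trace (paths as on this VM):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420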
00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.375 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:53.636 [2024-11-08 07:47:11.378063] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:19:53.636 [2024-11-08 07:47:11.378384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81720 ] 00:19:53.636 [2024-11-08 07:47:11.529990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.636 [2024-11-08 07:47:11.574024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.926 [2024-11-08 07:47:11.615144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:53.926 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.926 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:53.926 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:54.207 07:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:54.467 NVMe0n1 00:19:54.467 07:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81736 00:19:54.467 07:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.467 07:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:54.467 Running I/O for 10 seconds... 
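The host side then wires bdevperf (core 2, RPC socket /var/tmp/bdevperf.sock) to the target: NVMe0 is attached over TCP to 10.0.0.3:4420 with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, the knobs this timeout test exercises, and bdevperf.py kicks off the 10-second verify workload. Condensed from the trace (shell variables and backgrounding added for brevity; the wait for the bdevperf socket is elided):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r $bperf_sock -q 128 -o 4096 -w verify -t 10 -f &
bdevperf_pid=$!
$rpc -s $bperf_sock bdev_nvme_set_options -r -1
$rpc -s $bperf_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests &

Once I/O is flowing, the test removes the 10.0.0.3:4420 listener (the first command in the next trace block); the flood of nvmf_tcp_qpair_set_recv_state errors and ABORTED - SQ DELETION completions that follows is the connection being torn down and in-flight reads being aborted while the host's reconnect/ctrlr-loss timers run.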
00:19:55.406 07:47:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:55.669 8432.00 IOPS, 32.94 MiB/s [2024-11-08T07:47:13.630Z] [2024-11-08 07:47:13.519675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.520507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.520577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.520622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.520664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.520706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.520747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.520793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.520833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 
00:19:55.669 [2024-11-08 07:47:13.521627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.669 [2024-11-08 07:47:13.521789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.521829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.521957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.522974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.523887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524195] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.524962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.670 [2024-11-08 07:47:13.525374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.670 [2024-11-08 07:47:13.525386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.670 [2024-11-08 07:47:13.525396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.670 [2024-11-08 07:47:13.525406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.670 [2024-11-08 07:47:13.525415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.670 [2024-11-08 07:47:13.525425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.670 [2024-11-08 07:47:13.525434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.670 [2024-11-08 07:47:13.525442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ade50 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.525959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.526125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.526197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.526234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.526268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.526307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.526347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.670 [2024-11-08 07:47:13.526387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 
[2024-11-08 07:47:13.526744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.526953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.527958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.528040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.528082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.528130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.528170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.528211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1625b30 is same with the state(6) to be set 00:19:55.671 [2024-11-08 07:47:13.528374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.671 [2024-11-08 07:47:13.528781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.671 [2024-11-08 07:47:13.528791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.528984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.528993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 
[2024-11-08 07:47:13.529307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.672 [2024-11-08 07:47:13.529471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.672 [2024-11-08 07:47:13.529479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79512 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.529973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.529994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.530003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.530020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.530039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:55.673 [2024-11-08 07:47:13.530058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.530077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.530095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.530116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.530135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.673 [2024-11-08 07:47:13.530154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.673 [2024-11-08 07:47:13.530163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.674 [2024-11-08 07:47:13.530734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.674 [2024-11-08 07:47:13.530744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.675 [2024-11-08 07:47:13.530754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.675 [2024-11-08 07:47:13.530764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.675 [2024-11-08 07:47:13.530774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.675 [2024-11-08 07:47:13.530784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.675 [2024-11-08 07:47:13.530792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.675 [2024-11-08 07:47:13.530802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171b280 is same with the state(6) to be set 00:19:55.675 [2024-11-08 07:47:13.530813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:55.675 [2024-11-08 07:47:13.530819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:55.675 [2024-11-08 07:47:13.530827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:19:55.675 [2024-11-08 07:47:13.530836] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.675 [2024-11-08 07:47:13.531459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:55.675 [2024-11-08 07:47:13.531531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ade50 (9): Bad file descriptor 00:19:55.675 [2024-11-08 07:47:13.532007] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.675 [2024-11-08 07:47:13.532056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ade50 with addr=10.0.0.3, port=4420 00:19:55.675 [2024-11-08 07:47:13.532104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ade50 is same with the state(6) to be set 00:19:55.675 [2024-11-08 07:47:13.532216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ade50 (9): Bad file descriptor 00:19:55.675 [2024-11-08 07:47:13.532271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:55.675 [2024-11-08 07:47:13.532317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:55.675 [2024-11-08 07:47:13.532415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:55.675 [2024-11-08 07:47:13.532444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:55.675 [2024-11-08 07:47:13.532535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:55.675 07:47:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:57.550 4930.50 IOPS, 19.26 MiB/s [2024-11-08T07:47:15.771Z] 3287.00 IOPS, 12.84 MiB/s [2024-11-08T07:47:15.771Z] [2024-11-08 07:47:15.532740] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.810 [2024-11-08 07:47:15.532881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ade50 with addr=10.0.0.3, port=4420 00:19:57.810 [2024-11-08 07:47:15.533039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ade50 is same with the state(6) to be set 00:19:57.810 [2024-11-08 07:47:15.533098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ade50 (9): Bad file descriptor 00:19:57.810 [2024-11-08 07:47:15.533150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:57.810 [2024-11-08 07:47:15.533255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:57.810 [2024-11-08 07:47:15.533303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:57.810 [2024-11-08 07:47:15.533333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
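The flood of "ABORTED - SQ DELETION (00/08)" completions above is the expected side effect of the "resetting controller" notices: when the controller is torn down, every command still outstanding on the I/O submission queue is completed with NVMe generic status 0x08, Command Aborted due to SQ Deletion. The reconnect attempts that follow fail in uring_sock_create() with errno = 111, which on Linux is ECONNREFUSED, meaning nothing is accepting connections on 10.0.0.3 port 4420 at this point in the test. A minimal sketch (plain Python, nothing SPDK-specific assumed) to decode that errno:

    import errno
    import os

    # errno 111 as reported by the failed connect() calls above
    code = 111
    print(errno.errorcode[code])  # ECONNREFUSED
    print(os.strerror(code))      # Connection refused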
00:19:57.810 [2024-11-08 07:47:15.533380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:57.810 07:47:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:57.810 07:47:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:57.810 07:47:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:58.069 07:47:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:58.069 07:47:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:58.069 07:47:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:58.069 07:47:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:58.328 07:47:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:58.328 07:47:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:59.524 2465.25 IOPS, 9.63 MiB/s [2024-11-08T07:47:17.744Z] 1972.20 IOPS, 7.70 MiB/s [2024-11-08T07:47:17.744Z] [2024-11-08 07:47:17.533657] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.783 [2024-11-08 07:47:17.533693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ade50 with addr=10.0.0.3, port=4420 00:19:59.783 [2024-11-08 07:47:17.533706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ade50 is same with the state(6) to be set 00:19:59.783 [2024-11-08 07:47:17.533724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ade50 (9): Bad file descriptor 00:19:59.783 [2024-11-08 07:47:17.533740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:59.783 [2024-11-08 07:47:17.533750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:59.783 [2024-11-08 07:47:17.533760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:59.783 [2024-11-08 07:47:17.533770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:59.783 [2024-11-08 07:47:17.533780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:01.655 1643.50 IOPS, 6.42 MiB/s [2024-11-08T07:47:19.616Z] 1408.71 IOPS, 5.50 MiB/s [2024-11-08T07:47:19.616Z] [2024-11-08 07:47:19.533827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:01.655 [2024-11-08 07:47:19.533854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:01.655 [2024-11-08 07:47:19.533864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:01.655 [2024-11-08 07:47:19.533872] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:20:01.655 [2024-11-08 07:47:19.533882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
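The retry cadence can be read straight off the timestamps: the failed connect() calls are logged at 07:47:13.532007, 07:47:15.532740 and 07:47:17.533657, roughly 2 seconds apart, and by 07:47:19.53 nvme_ctrlr_fail reports the controller is "already in failed state", so bdev_nvme stops resetting it. The falling bdevperf samples (4930.50, 3287.00, 2465.25, ... IOPS) are consistent with a cumulative average over a growing runtime while no new I/O completes. A small sketch that checks both observations (pure stdlib; the only assumption is that the samples correspond to runtimes of 2 s through 8 s):

    from datetime import datetime

    # Spacing between the failed reconnect attempts logged above
    attempts = ["07:47:13.532007", "07:47:15.532740", "07:47:17.533657"]
    t = [datetime.strptime(s, "%H:%M:%S.%f") for s in attempts]
    print([(b - a).total_seconds() for a, b in zip(t, t[1:])])  # ~[2.0, 2.0]

    # Cumulative-average check on the per-second IOPS samples
    samples = [4930.50, 3287.00, 2465.25, 1972.20, 1643.50, 1408.71, 1232.62]
    print([round(iops * secs) for secs, iops in enumerate(samples, start=2)])
    # all ~9861 -> the completion count is frozen while the controller is down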
00:20:02.591 1232.62 IOPS, 4.81 MiB/s 00:20:02.591 Latency(us) 00:20:02.591 [2024-11-08T07:47:20.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.591 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:02.591 Verification LBA range: start 0x0 length 0x4000 00:20:02.591 NVMe0n1 : 8.18 1205.16 4.71 15.64 0.00 104726.26 2496.61 7030452.42 00:20:02.591 [2024-11-08T07:47:20.552Z] =================================================================================================================== 00:20:02.591 [2024-11-08T07:47:20.552Z] Total : 1205.16 4.71 15.64 0.00 104726.26 2496.61 7030452.42 00:20:02.591 { 00:20:02.591 "results": [ 00:20:02.591 { 00:20:02.591 "job": "NVMe0n1", 00:20:02.591 "core_mask": "0x4", 00:20:02.591 "workload": "verify", 00:20:02.591 "status": "finished", 00:20:02.591 "verify_range": { 00:20:02.591 "start": 0, 00:20:02.591 "length": 16384 00:20:02.591 }, 00:20:02.591 "queue_depth": 128, 00:20:02.591 "io_size": 4096, 00:20:02.591 "runtime": 8.182293, 00:20:02.591 "iops": 1205.1633936843866, 00:20:02.591 "mibps": 4.707669506579635, 00:20:02.591 "io_failed": 128, 00:20:02.591 "io_timeout": 0, 00:20:02.591 "avg_latency_us": 104726.26181828581, 00:20:02.591 "min_latency_us": 2496.609523809524, 00:20:02.591 "max_latency_us": 7030452.419047619 00:20:02.591 } 00:20:02.591 ], 00:20:02.591 "core_count": 1 00:20:02.591 } 00:20:03.159 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:03.159 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:03.159 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:03.418 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:03.419 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:03.419 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:03.419 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81736 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81720 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81720 ']' 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81720 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81720 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81720' 00:20:03.678 killing process with pid 81720 00:20:03.678 Received shutdown signal, test time was about 9.251548 
seconds 00:20:03.678 00:20:03.678 Latency(us) 00:20:03.678 [2024-11-08T07:47:21.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.678 [2024-11-08T07:47:21.639Z] =================================================================================================================== 00:20:03.678 [2024-11-08T07:47:21.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81720 00:20:03.678 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81720 00:20:03.936 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:04.195 [2024-11-08 07:47:21.934219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81853 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81853 /var/tmp/bdevperf.sock 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81853 ']' 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.195 07:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:04.195 [2024-11-08 07:47:22.009563] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
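The summary fields in the JSON block above are internally consistent and worth a quick sanity check: MiB/s is IOPS times the 4096-byte I/O size divided by 2^20, and IOPS times the runtime gives the number of completed reads. A short recomputation (values copied from the JSON above):

    # Recompute the bdevperf summary printed above
    iops = 1205.1633936843866
    runtime = 8.182293            # seconds
    io_size = 4096                # bytes ("io_size": 4096)

    print(round(iops * io_size / 2**20, 3))  # ~4.708 -> matches "mibps"
    print(round(iops * runtime))             # ~9861 completed I/Os, plus 128 in "io_failed"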
00:20:04.195 [2024-11-08 07:47:22.009667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81853 ] 00:20:04.454 [2024-11-08 07:47:22.160201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.454 [2024-11-08 07:47:22.225162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.454 [2024-11-08 07:47:22.302954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.021 07:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.021 07:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:20:05.021 07:47:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:05.280 07:47:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:05.538 NVMe0n1 00:20:05.538 07:47:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81875 00:20:05.538 07:47:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.538 07:47:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:05.538 Running I/O for 10 seconds... 
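The bdev_nvme_attach_controller call above wires in the timeout behavior the rest of this test exercises: with --reconnect-delay-sec 1 the initiator retries the TCP connection every second after a disconnect, with --fast-io-fail-timeout-sec 2 queued I/O starts failing back to bdevperf once the controller has been unreachable for 2 seconds, and with --ctrlr-loss-timeout-sec 5 the controller is deleted outright if it has not come back within 5 seconds. A rough sketch of the JSON-RPC request the rpc.py wrapper sends for that call (the parameter names are assumed to mirror the CLI flags and are not taken from this log):

    import json

    # Assumed JSON-RPC payload for the attach call shown above; field names
    # are inferred from the CLI flags, not confirmed against SPDK sources.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "NVMe0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "trsvcid": "4420",
            "adrfam": "ipv4",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "ctrlr_loss_timeout_sec": 5,
            "reconnect_delay_sec": 1,
            "fast_io_fail_timeout_sec": 2,
        },
    }
    print(json.dumps(request, indent=2))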
00:20:06.474 07:47:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:06.735 8823.00 IOPS, 34.46 MiB/s [2024-11-08T07:47:24.696Z] [2024-11-08 07:47:24.565533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78304 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 
[2024-11-08 07:47:24.565966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.735 [2024-11-08 07:47:24.565989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.735 [2024-11-08 07:47:24.565998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.736 [2024-11-08 07:47:24.566786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.736 [2024-11-08 07:47:24.566796] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.566982] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.566991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.737 [2024-11-08 07:47:24.567308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.737 [2024-11-08 07:47:24.567562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.737 [2024-11-08 07:47:24.567573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 
07:47:24.567581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.738 [2024-11-08 07:47:24.567788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:06.738 [2024-11-08 07:47:24.567946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.567981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.567998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.568007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.568017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.568025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.568034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.568042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.568052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.568060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.568071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.738 [2024-11-08 07:47:24.568079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.568089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c4280 is same with the state(6) to be set 00:20:06.738 [2024-11-08 07:47:24.568099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:06.738 [2024-11-08 07:47:24.568107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:06.738 [2024-11-08 07:47:24.568114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:20:06.738 [2024-11-08 07:47:24.568122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.738 [2024-11-08 07:47:24.568381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:06.738 [2024-11-08 07:47:24.568451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656e50 (9): Bad file descriptor 00:20:06.738 [2024-11-08 07:47:24.568539] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.738 [2024-11-08 07:47:24.568553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1656e50 with addr=10.0.0.3, port=4420 00:20:06.738 [2024-11-08 
07:47:24.568563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656e50 is same with the state(6) to be set 00:20:06.738 [2024-11-08 07:47:24.568576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656e50 (9): Bad file descriptor 00:20:06.738 [2024-11-08 07:47:24.568598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:06.738 [2024-11-08 07:47:24.568607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:06.738 [2024-11-08 07:47:24.568618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:06.738 [2024-11-08 07:47:24.568627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:06.738 [2024-11-08 07:47:24.568638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:06.738 07:47:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:07.675 4866.50 IOPS, 19.01 MiB/s [2024-11-08T07:47:25.636Z] [2024-11-08 07:47:25.568776] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.675 [2024-11-08 07:47:25.568834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1656e50 with addr=10.0.0.3, port=4420 00:20:07.675 [2024-11-08 07:47:25.568855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656e50 is same with the state(6) to be set 00:20:07.675 [2024-11-08 07:47:25.568886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656e50 (9): Bad file descriptor 00:20:07.675 [2024-11-08 07:47:25.568913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:07.675 [2024-11-08 07:47:25.568928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:07.675 [2024-11-08 07:47:25.568945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:07.675 [2024-11-08 07:47:25.568962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:07.675 [2024-11-08 07:47:25.568996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:07.675 07:47:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:07.934 [2024-11-08 07:47:25.830107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:07.934 07:47:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81875 00:20:08.870 3244.33 IOPS, 12.67 MiB/s [2024-11-08T07:47:26.831Z] [2024-11-08 07:47:26.581923] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
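The host/timeout.sh@91 step above restores the TCP listener on 10.0.0.3:4420, after which the reconnect attempts stop failing with errno 111 (connection refused) and the controller reset finally completes ("Resetting controller successful"). The scripts/rpc.py invocation shown is a thin client for SPDK's JSON-RPC server; as a rough sketch (not part of the test itself), an equivalent request could be sent by hand over the RPC Unix socket. The socket path below is the SPDK default and is an assumption for this run, as is reading the whole response in a single recv(); the NQN, transport, address and port are copied from the log line.

    #!/usr/bin/env python3
    # Rough equivalent of:
    #   scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    #       -t tcp -a 10.0.0.3 -s 4420
    # Sketch only: assumes the target's RPC server is at the default /var/tmp/spdk.sock.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_listener",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "listen_address": {
                "trtype": "TCP",
                "adrfam": "IPv4",
                "traddr": "10.0.0.3",
                "trsvcid": "4420",
            },
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")          # default SPDK RPC socket (assumed)
        sock.sendall(json.dumps(request).encode())  # send the JSON-RPC request
        print(sock.recv(4096).decode())             # print whatever the server returns

Once the listener is back, the "NVMe/TCP Target Listening on 10.0.0.3 port 4420" notice appears and the pending reset goes through, as the bdev_nvme message above shows.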
00:20:10.748 2433.25 IOPS, 9.50 MiB/s [2024-11-08T07:47:29.646Z] 3858.20 IOPS, 15.07 MiB/s [2024-11-08T07:47:30.583Z] 5028.50 IOPS, 19.64 MiB/s [2024-11-08T07:47:31.545Z] 5864.43 IOPS, 22.91 MiB/s [2024-11-08T07:47:32.482Z] 6500.38 IOPS, 25.39 MiB/s [2024-11-08T07:47:33.861Z] 6999.22 IOPS, 27.34 MiB/s [2024-11-08T07:47:33.861Z] 7398.30 IOPS, 28.90 MiB/s 00:20:15.900 Latency(us) 00:20:15.900 [2024-11-08T07:47:33.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.900 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.900 Verification LBA range: start 0x0 length 0x4000 00:20:15.900 NVMe0n1 : 10.01 7404.37 28.92 0.00 0.00 17258.80 963.54 3019898.88 00:20:15.900 [2024-11-08T07:47:33.861Z] =================================================================================================================== 00:20:15.900 [2024-11-08T07:47:33.861Z] Total : 7404.37 28.92 0.00 0.00 17258.80 963.54 3019898.88 00:20:15.900 { 00:20:15.900 "results": [ 00:20:15.900 { 00:20:15.900 "job": "NVMe0n1", 00:20:15.900 "core_mask": "0x4", 00:20:15.900 "workload": "verify", 00:20:15.900 "status": "finished", 00:20:15.900 "verify_range": { 00:20:15.900 "start": 0, 00:20:15.900 "length": 16384 00:20:15.900 }, 00:20:15.900 "queue_depth": 128, 00:20:15.900 "io_size": 4096, 00:20:15.900 "runtime": 10.006391, 00:20:15.900 "iops": 7404.367868495245, 00:20:15.900 "mibps": 28.92331198630955, 00:20:15.900 "io_failed": 0, 00:20:15.900 "io_timeout": 0, 00:20:15.900 "avg_latency_us": 17258.798719875365, 00:20:15.900 "min_latency_us": 963.5352380952381, 00:20:15.900 "max_latency_us": 3019898.88 00:20:15.900 } 00:20:15.900 ], 00:20:15.900 "core_count": 1 00:20:15.900 } 00:20:15.900 07:47:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81981 00:20:15.900 07:47:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.900 07:47:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:20:15.900 Running I/O for 10 seconds... 
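The bdevperf summary for the completed run (printed just above, before this second perform_tests pass starts) is internally consistent, and the headline figures can be re-derived from the JSON block alone: throughput in MiB/s is IOPS times the 4096-byte I/O size, and with a queue depth of 128 the average latency is roughly what Little's law predicts for a queue that stays full. The snippet below only recomputes numbers already present in the log; treating the queue as fully occupied for the whole run is an approximation.

    # Cross-check of the bdevperf results above (values copied from the JSON block).
    iops = 7404.367868495245
    io_size = 4096          # bytes per I/O ("io_size": 4096)
    queue_depth = 128       # "queue_depth": 128

    # MiB/s = IOPS * bytes per I/O / 2^20
    mibps = iops * io_size / (1024 * 1024)
    print(f"throughput  ~ {mibps:.2f} MiB/s")        # ~28.92, matches "mibps"

    # Little's law: in-flight I/O = IOPS * latency, so latency ~ queue_depth / IOPS
    avg_latency_us = queue_depth / iops * 1e6
    print(f"avg latency ~ {avg_latency_us:.0f} us")  # ~17287 us, close to the reported 17258.80

The 3,019,898.88 us maximum latency is consistent with an I/O that was queued while the target was unreachable and only completed after the listener was restored and the controller reset succeeded.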
00:20:16.841 07:47:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:16.841 8567.00 IOPS, 33.46 MiB/s [2024-11-08T07:47:34.802Z] [2024-11-08 07:47:34.726051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with [2024-11-08 07:47:34.726105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.841 [2024-11-08 07:47:34.726153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.841 [2024-11-08 07:47:34.726165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.841 [2024-11-08 07:47:34.726174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.841 [2024-11-08 07:47:34.726183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.841 [2024-11-08 07:47:34.726192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.841 [2024-11-08 07:47:34.726201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.841 [2024-11-08 07:47:34.726210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.841 [2024-11-08 07:47:34.726218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656e50 is same with the state(6) to be set 00:20:16.841 the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.727986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 
07:47:34.728447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.841 [2024-11-08 07:47:34.728599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.728952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same 
with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.729951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730113] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the 
state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.730971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626250 is same with the state(6) to be set 00:20:16.842 [2024-11-08 07:47:34.731704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.842 [2024-11-08 07:47:34.731736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.842 [2024-11-08 07:47:34.731761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.842 [2024-11-08 07:47:34.731774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.842 [2024-11-08 07:47:34.731788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.731802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.731818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.731831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.731847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.731860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.731873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.731886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.731904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.731920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.731934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.731946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.731960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.731972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.732000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.732047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.732063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.732076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.843 [2024-11-08 07:47:34.732089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:16.843 [2024-11-08 07:47:34.732104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair of notices (nvme_qpair.c: 243:nvme_io_qpair_print_command followed by nvme_qpair.c: 474:spdk_nvme_print_completion reporting ABORTED - SQ DELETION (00/08)) repeats for every outstanding command on qid:1: READ commands covering lba 75984 through 76776 and WRITE commands covering lba 76784 through 76872 ...]
00:20:16.846 [2024-11-08 07:47:34.735275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:16.846 [2024-11-08 07:47:34.735287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.846 [2024-11-08 07:47:34.735302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.846 [2024-11-08 07:47:34.735316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.846 [2024-11-08 07:47:34.735331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.846 [2024-11-08 07:47:34.735343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.846 [2024-11-08 07:47:34.735356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.846 [2024-11-08 07:47:34.735371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.846 [2024-11-08 07:47:34.735387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.846 [2024-11-08 07:47:34.735399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.846 [2024-11-08 07:47:34.735411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c5350 is same with the state(6) to be set 00:20:16.846 [2024-11-08 07:47:34.735425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:16.846 [2024-11-08 07:47:34.735434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:16.846 [2024-11-08 07:47:34.735445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76912 len:8 PRP1 0x0 PRP2 0x0 00:20:16.846 [2024-11-08 07:47:34.735456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.846 [2024-11-08 07:47:34.735712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:16.846 [2024-11-08 07:47:34.735746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656e50 (9): Bad file descriptor 00:20:16.846 [2024-11-08 07:47:34.735836] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:16.846 [2024-11-08 07:47:34.735856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1656e50 with addr=10.0.0.3, port=4420 00:20:16.846 [2024-11-08 07:47:34.735872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656e50 is same with the state(6) to be set 00:20:16.846 [2024-11-08 07:47:34.735890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656e50 (9): Bad file descriptor 00:20:16.846 [2024-11-08 07:47:34.735906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:16.846 [2024-11-08 07:47:34.735922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller 
reinitialization failed 00:20:16.846 [2024-11-08 07:47:34.735935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:16.846 [2024-11-08 07:47:34.735947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:16.846 [2024-11-08 07:47:34.735960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:16.846 07:47:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:17.783 4743.50 IOPS, 18.53 MiB/s [2024-11-08T07:47:35.744Z] [2024-11-08 07:47:35.736080] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:17.784 [2024-11-08 07:47:35.736120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1656e50 with addr=10.0.0.3, port=4420 00:20:17.784 [2024-11-08 07:47:35.736133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656e50 is same with the state(6) to be set 00:20:17.784 [2024-11-08 07:47:35.736151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656e50 (9): Bad file descriptor 00:20:17.784 [2024-11-08 07:47:35.736166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:17.784 [2024-11-08 07:47:35.736175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:17.784 [2024-11-08 07:47:35.736187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:17.784 [2024-11-08 07:47:35.736197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:17.784 [2024-11-08 07:47:35.736207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:18.990 3162.33 IOPS, 12.35 MiB/s [2024-11-08T07:47:36.951Z] [2024-11-08 07:47:36.736311] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:18.990 [2024-11-08 07:47:36.736348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1656e50 with addr=10.0.0.3, port=4420 00:20:18.990 [2024-11-08 07:47:36.736360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656e50 is same with the state(6) to be set 00:20:18.990 [2024-11-08 07:47:36.736393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656e50 (9): Bad file descriptor 00:20:18.990 [2024-11-08 07:47:36.736409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:18.990 [2024-11-08 07:47:36.736418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:18.990 [2024-11-08 07:47:36.736428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:18.990 [2024-11-08 07:47:36.736438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
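The repeated connect() failures above carry errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.3:4420 at this point, so each reconnect attempt from the host is refused until the listener is re-created further down (host/timeout.sh@102). A one-line, illustrative Python check of the errno name (not part of the test output):

import errno

# errno 111 on Linux maps to ECONNREFUSED, matching the repeated
# "connect() failed, errno = 111" messages from uring_sock_create above.
print(errno.errorcode[111])  # -> 'ECONNREFUSED'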
00:20:18.990 [2024-11-08 07:47:36.736449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:19.962 2371.75 IOPS, 9.26 MiB/s [2024-11-08T07:47:37.923Z] [2024-11-08 07:47:37.738923] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.962 [2024-11-08 07:47:37.738967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1656e50 with addr=10.0.0.3, port=4420 00:20:19.962 [2024-11-08 07:47:37.738993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1656e50 is same with the state(6) to be set 00:20:19.962 [2024-11-08 07:47:37.739194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1656e50 (9): Bad file descriptor 00:20:19.962 [2024-11-08 07:47:37.739376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:19.962 [2024-11-08 07:47:37.739387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:19.962 [2024-11-08 07:47:37.739397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:19.962 [2024-11-08 07:47:37.739407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:19.962 [2024-11-08 07:47:37.739419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:19.962 07:47:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:20.221 [2024-11-08 07:47:37.973593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:20.221 07:47:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81981 00:20:21.048 1897.40 IOPS, 7.41 MiB/s [2024-11-08T07:47:39.009Z] [2024-11-08 07:47:38.766765] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
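Two details of the stretch above can be sanity-checked directly from the quoted numbers. First, the failed connect() attempts at 07:47:34.735836, 35.736080, 36.736311 and 37.738923 are roughly one second apart, and the reset only succeeds (07:47:38.766765) once the listener is back at 07:47:37.973. Second, the falling IOPS figures (4743.50, 3162.33, 2371.75, 1897.40) behave like cumulative averages: the completed-I/O total stays near 9487 while the elapsed seconds grow, which is why the rate appears to decay smoothly even though no I/O completes while the controller is down. A small illustrative Python check, using only values copied from the log above:

# 1) Reconnect attempts while the listener is down are ~1 s apart.
attempts = [34.735836, 35.736080, 36.736311, 37.738923]  # seconds past 07:47
gaps = [round(b - a, 3) for a, b in zip(attempts, attempts[1:])]
print(gaps)  # -> [1.0, 1.0, 1.003]

# 2) The decaying IOPS numbers are consistent with a cumulative average:
#    elapsed_seconds * reported_IOPS stays roughly constant (~9487 I/Os).
for sec, iops in [(2, 4743.50), (3, 3162.33), (4, 2371.75), (5, 1897.40)]:
    print(sec, round(sec * iops))  # -> 9487 each time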
00:20:22.920 3093.50 IOPS, 12.08 MiB/s [2024-11-08T07:47:41.818Z] 4171.00 IOPS, 16.29 MiB/s [2024-11-08T07:47:42.755Z] 4998.00 IOPS, 19.52 MiB/s [2024-11-08T07:47:43.692Z] 5638.78 IOPS, 22.03 MiB/s [2024-11-08T07:47:43.692Z] 6150.10 IOPS, 24.02 MiB/s 00:20:25.731 Latency(us) 00:20:25.731 [2024-11-08T07:47:43.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.731 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.731 Verification LBA range: start 0x0 length 0x4000 00:20:25.731 NVMe0n1 : 10.01 6157.34 24.05 5209.49 0.00 11236.57 674.86 3035877.18 00:20:25.731 [2024-11-08T07:47:43.692Z] =================================================================================================================== 00:20:25.731 [2024-11-08T07:47:43.692Z] Total : 6157.34 24.05 5209.49 0.00 11236.57 0.00 3035877.18 00:20:25.731 { 00:20:25.731 "results": [ 00:20:25.731 { 00:20:25.731 "job": "NVMe0n1", 00:20:25.731 "core_mask": "0x4", 00:20:25.731 "workload": "verify", 00:20:25.731 "status": "finished", 00:20:25.731 "verify_range": { 00:20:25.731 "start": 0, 00:20:25.731 "length": 16384 00:20:25.731 }, 00:20:25.731 "queue_depth": 128, 00:20:25.731 "io_size": 4096, 00:20:25.731 "runtime": 10.009036, 00:20:25.731 "iops": 6157.336230981685, 00:20:25.731 "mibps": 24.052094652272206, 00:20:25.731 "io_failed": 52142, 00:20:25.731 "io_timeout": 0, 00:20:25.731 "avg_latency_us": 11236.565868597363, 00:20:25.731 "min_latency_us": 674.8647619047618, 00:20:25.731 "max_latency_us": 3035877.180952381 00:20:25.731 } 00:20:25.731 ], 00:20:25.731 "core_count": 1 00:20:25.731 } 00:20:25.731 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81853 00:20:25.731 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81853 ']' 00:20:25.731 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81853 00:20:25.731 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:20:25.731 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:25.731 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81853 00:20:25.990 killing process with pid 81853 00:20:25.990 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.990 00:20:25.990 Latency(us) 00:20:25.990 [2024-11-08T07:47:43.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.990 [2024-11-08T07:47:43.951Z] =================================================================================================================== 00:20:25.990 [2024-11-08T07:47:43.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.990 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:25.990 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:25.990 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81853' 00:20:25.990 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81853 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81853 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82095 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82095 /var/tmp/bdevperf.sock 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82095 ']' 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.991 07:47:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:25.991 [2024-11-08 07:47:43.936072] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:20:25.991 [2024-11-08 07:47:43.936478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82095 ] 00:20:26.250 [2024-11-08 07:47:44.098514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.250 [2024-11-08 07:47:44.146209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.250 [2024-11-08 07:47:44.188529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.186 07:47:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:27.186 07:47:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:20:27.186 07:47:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82095 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:27.186 07:47:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82111 00:20:27.186 07:47:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:27.186 07:47:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:27.446 NVMe0n1 00:20:27.446 07:47:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82157 00:20:27.446 07:47:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.446 07:47:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:27.704 Running I/O for 10 seconds... 
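Before the next 10-second run starts below, the summary that bdevperf printed for the previous run (the Latency(us) table and the JSON block further up) is easy to cross-check: the MiB/s column is iops * io_size / (1024*1024) and the Fail/s column is io_failed / runtime. An illustrative Python re-derivation using the values quoted in that JSON block:

# Values copied from the JSON results block above.
io_size = 4096                # "io_size": 4096 (bytes per I/O)
runtime = 10.009036           # "runtime": 10.009036 (seconds)
iops = 6157.336230981685      # "iops"
io_failed = 52142             # "io_failed"

mib_per_s = iops * io_size / (1024 * 1024)
fail_per_s = io_failed / runtime

print(round(mib_per_s, 2))    # -> 24.05, the MiB/s column
print(round(fail_per_s, 2))   # -> 5209.49, the Fail/s column
# "io_timeout": 0 likewise accounts for the 0.00 in the TO/s column.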
00:20:28.641 07:47:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:28.904 18288.00 IOPS, 71.44 MiB/s [2024-11-08T07:47:46.865Z] [2024-11-08 07:47:46.609488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.904 [2024-11-08 07:47:46.609708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.609851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with [2024-11-08 07:47:46.609879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.905 [2024-11-08 07:47:46.609914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.905 [2024-11-08 07:47:46.609933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.905 [2024-11-08 07:47:46.609948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.905 [2024-11-08 07:47:46.609965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.905 [2024-11-08 07:47:46.609976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.905 [2024-11-08 07:47:46.610006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.905 [2024-11-08 07:47:46.610019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.905 [2024-11-08 07:47:46.610032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb55e50 is same with the state(6) to be set 00:20:28.905 the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.610225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.610327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.610480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.610582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.610674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.610789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.610909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1629aa0 is same with the state(6) to be set
[... this tcp.c:1773 nvmf_tcp_qpair_set_recv_state *ERROR* notice repeats many times for tqpair=0x1629aa0, with timestamps running from 07:47:46.611050 through 07:47:46.611429 ...]
00:20:28.905 [2024-11-08 07:47:46.611438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same
with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611662] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.905 [2024-11-08 07:47:46.611672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the 
state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.611990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1629aa0 is same with the state(6) to be set 00:20:28.906 [2024-11-08 07:47:46.612076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
[2024-11-08 07:47:46.612174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-08 07:47:46.612194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command print and ABORTED - SQ DELETION completion are logged for every remaining outstanding I/O on qid:1 (cid:3 through cid:126, then cid:1 and cid:0), from 07:47:46.612214 through 07:47:46.615671 ...]
[2024-11-08 07:47:46.615683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc3140 is same with the state(6) to be set
[2024-11-08 07:47:46.615697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-08 07:47:46.615707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-08 07:47:46.615718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1848 len:8 PRP1 0x0 PRP2 0x0
[2024-11-08 07:47:46.615730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:28.909 [2024-11-08 07:47:46.616014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
[2024-11-08 07:47:46.616045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb55e50 (9): Bad file descriptor
[2024-11-08 07:47:46.616159] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-08 07:47:46.616179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb55e50 with addr=10.0.0.3, port=4420
[2024-11-08 07:47:46.616192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb55e50 is same with the state(6) to be set
[2024-11-08 07:47:46.616210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb55e50 (9): Bad file descriptor
[2024-11-08 07:47:46.616226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
[2024-11-08 07:47:46.616237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
[2024-11-08 07:47:46.616249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
[2024-11-08 07:47:46.616261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
[2024-11-08 07:47:46.616274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:20:28.910 07:47:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82157
00:20:30.783 10034.00 IOPS, 39.20 MiB/s [2024-11-08T07:47:48.744Z] 6689.33 IOPS, 26.13 MiB/s [2024-11-08T07:47:48.744Z]
[2024-11-08 07:47:48.616413] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:30.783 [2024-11-08 07:47:48.616456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb55e50 with addr=10.0.0.3, port=4420
[2024-11-08 07:47:48.616469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb55e50 is same with the state(6) to be set
[2024-11-08 07:47:48.616488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb55e50 (9): Bad file descriptor
[2024-11-08 07:47:48.616504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
[2024-11-08 07:47:48.616514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
[2024-11-08 07:47:48.616524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
[2024-11-08 07:47:48.616534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:20:30.783 [2024-11-08 07:47:48.616545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:32.656 5017.00 IOPS, 19.60 MiB/s [2024-11-08T07:47:50.876Z] 4013.60 IOPS, 15.68 MiB/s [2024-11-08T07:47:50.876Z] [2024-11-08 07:47:50.616740] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:32.915 [2024-11-08 07:47:50.616782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb55e50 with addr=10.0.0.3, port=4420 00:20:32.915 [2024-11-08 07:47:50.616795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb55e50 is same with the state(6) to be set 00:20:32.915 [2024-11-08 07:47:50.616814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb55e50 (9): Bad file descriptor 00:20:32.915 [2024-11-08 07:47:50.616832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:32.915 [2024-11-08 07:47:50.616842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:32.915 [2024-11-08 07:47:50.616853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:32.915 [2024-11-08 07:47:50.616862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:32.915 [2024-11-08 07:47:50.616873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:34.789 3344.67 IOPS, 13.07 MiB/s [2024-11-08T07:47:52.750Z] 2866.86 IOPS, 11.20 MiB/s [2024-11-08T07:47:52.750Z] [2024-11-08 07:47:52.617005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:34.789 [2024-11-08 07:47:52.617040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:34.789 [2024-11-08 07:47:52.617051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:34.789 [2024-11-08 07:47:52.617062] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:20:34.789 [2024-11-08 07:47:52.617073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:20:35.726 2508.50 IOPS, 9.80 MiB/s 00:20:35.726 Latency(us) 00:20:35.726 [2024-11-08T07:47:53.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.726 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:35.726 NVMe0n1 : 8.11 2474.74 9.67 15.78 0.00 51392.32 6772.05 7030452.42 00:20:35.726 [2024-11-08T07:47:53.687Z] =================================================================================================================== 00:20:35.726 [2024-11-08T07:47:53.687Z] Total : 2474.74 9.67 15.78 0.00 51392.32 6772.05 7030452.42 00:20:35.726 { 00:20:35.726 "results": [ 00:20:35.726 { 00:20:35.726 "job": "NVMe0n1", 00:20:35.726 "core_mask": "0x4", 00:20:35.726 "workload": "randread", 00:20:35.726 "status": "finished", 00:20:35.726 "queue_depth": 128, 00:20:35.726 "io_size": 4096, 00:20:35.726 "runtime": 8.109138, 00:20:35.726 "iops": 2474.7389919865714, 00:20:35.726 "mibps": 9.666949187447544, 00:20:35.726 "io_failed": 128, 00:20:35.726 "io_timeout": 0, 00:20:35.726 "avg_latency_us": 51392.32104726065, 00:20:35.726 "min_latency_us": 6772.053333333333, 00:20:35.726 "max_latency_us": 7030452.419047619 00:20:35.726 } 00:20:35.726 ], 00:20:35.726 "core_count": 1 00:20:35.726 } 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:35.726 Attaching 5 probes... 00:20:35.726 1252.386645: reset bdev controller NVMe0 00:20:35.726 1252.490068: reconnect bdev controller NVMe0 00:20:35.726 3252.723422: reconnect delay bdev controller NVMe0 00:20:35.726 3252.739997: reconnect bdev controller NVMe0 00:20:35.726 5253.033260: reconnect delay bdev controller NVMe0 00:20:35.726 5253.049845: reconnect bdev controller NVMe0 00:20:35.726 7253.360138: reconnect delay bdev controller NVMe0 00:20:35.726 7253.376400: reconnect bdev controller NVMe0 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82111 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82095 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82095 ']' 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82095 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:35.726 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82095 00:20:35.985 killing process with pid 82095 00:20:35.985 Received shutdown signal, test time was about 8.182389 seconds 00:20:35.985 00:20:35.985 Latency(us) 00:20:35.985 [2024-11-08T07:47:53.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.985 [2024-11-08T07:47:53.946Z] =================================================================================================================== 00:20:35.985 [2024-11-08T07:47:53.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.985 07:47:53 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:35.985 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:35.985 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82095' 00:20:35.985 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82095 00:20:35.985 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82095 00:20:35.985 07:47:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.245 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:36.245 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:36.245 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.245 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:36.245 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.245 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:36.245 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.245 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.245 rmmod nvme_tcp 00:20:36.245 rmmod nvme_fabrics 00:20:36.245 rmmod nvme_keyring 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81675 ']' 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81675 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81675 ']' 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81675 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81675 00:20:36.504 killing process with pid 81675 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81675' 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81675 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81675 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.504 07:47:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:36.504 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.763 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.023 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:37.023 ************************************ 00:20:37.023 END TEST nvmf_timeout 00:20:37.023 ************************************ 00:20:37.023 00:20:37.023 real 0m46.388s 00:20:37.023 user 2m13.092s 00:20:37.023 sys 0m7.113s 00:20:37.023 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:37.023 07:47:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:37.023 07:47:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:37.023 07:47:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:37.023 ************************************ 00:20:37.023 END TEST nvmf_host 00:20:37.023 ************************************ 00:20:37.023 00:20:37.023 real 4m59.080s 00:20:37.023 user 12m32.359s 00:20:37.023 sys 1m24.586s 00:20:37.023 07:47:54 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:20:37.023 07:47:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.023 07:47:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:37.023 07:47:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:37.023 ************************************ 00:20:37.023 END TEST nvmf_tcp 00:20:37.023 ************************************ 00:20:37.023 00:20:37.023 real 12m29.656s 00:20:37.023 user 29m2.126s 00:20:37.023 sys 3m46.889s 00:20:37.023 07:47:54 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:37.023 07:47:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.023 07:47:54 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:20:37.023 07:47:54 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:37.023 07:47:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:37.023 07:47:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:37.023 07:47:54 -- common/autotest_common.sh@10 -- # set +x 00:20:37.023 ************************************ 00:20:37.023 START TEST nvmf_dif 00:20:37.023 ************************************ 00:20:37.023 07:47:54 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:37.283 * Looking for test storage... 00:20:37.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:37.283 07:47:54 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:37.283 07:47:54 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:20:37.283 07:47:54 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:37.283 07:47:55 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.283 07:47:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:37.283 07:47:55 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.283 07:47:55 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:37.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.283 --rc genhtml_branch_coverage=1 00:20:37.283 --rc genhtml_function_coverage=1 00:20:37.283 --rc genhtml_legend=1 00:20:37.283 --rc geninfo_all_blocks=1 00:20:37.284 --rc geninfo_unexecuted_blocks=1 00:20:37.284 00:20:37.284 ' 00:20:37.284 07:47:55 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:37.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.284 --rc genhtml_branch_coverage=1 00:20:37.284 --rc genhtml_function_coverage=1 00:20:37.284 --rc genhtml_legend=1 00:20:37.284 --rc geninfo_all_blocks=1 00:20:37.284 --rc geninfo_unexecuted_blocks=1 00:20:37.284 00:20:37.284 ' 00:20:37.284 07:47:55 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:37.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.284 --rc genhtml_branch_coverage=1 00:20:37.284 --rc genhtml_function_coverage=1 00:20:37.284 --rc genhtml_legend=1 00:20:37.284 --rc geninfo_all_blocks=1 00:20:37.284 --rc geninfo_unexecuted_blocks=1 00:20:37.284 00:20:37.284 ' 00:20:37.284 07:47:55 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:37.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.284 --rc genhtml_branch_coverage=1 00:20:37.284 --rc genhtml_function_coverage=1 00:20:37.284 --rc genhtml_legend=1 00:20:37.284 --rc geninfo_all_blocks=1 00:20:37.284 --rc geninfo_unexecuted_blocks=1 00:20:37.284 00:20:37.284 ' 00:20:37.284 07:47:55 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.284 07:47:55 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.284 07:47:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.284 07:47:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.284 07:47:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.284 07:47:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.284 07:47:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.284 07:47:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.284 07:47:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.284 07:47:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:37.284 07:47:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.284 07:47:55 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.284 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.284 07:47:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:37.284 07:47:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:37.284 07:47:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:37.284 07:47:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:37.284 07:47:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.284 07:47:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:37.284 07:47:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:37.284 Cannot find device 
"nvmf_init_br" 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:37.284 Cannot find device "nvmf_init_br2" 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:37.284 Cannot find device "nvmf_tgt_br" 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.284 Cannot find device "nvmf_tgt_br2" 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:37.284 Cannot find device "nvmf_init_br" 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:37.284 Cannot find device "nvmf_init_br2" 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:37.284 07:47:55 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:37.544 Cannot find device "nvmf_tgt_br" 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:37.544 Cannot find device "nvmf_tgt_br2" 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:37.544 Cannot find device "nvmf_br" 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:37.544 Cannot find device "nvmf_init_if" 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:37.544 Cannot find device "nvmf_init_if2" 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.544 07:47:55 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:37.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:37.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:37.804 00:20:37.804 --- 10.0.0.3 ping statistics --- 00:20:37.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.804 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:37.804 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:37.804 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:20:37.804 00:20:37.804 --- 10.0.0.4 ping statistics --- 00:20:37.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.804 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:37.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:37.804 00:20:37.804 --- 10.0.0.1 ping statistics --- 00:20:37.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.804 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:37.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:37.804 00:20:37.804 --- 10.0.0.2 ping statistics --- 00:20:37.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.804 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:37.804 07:47:55 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:38.064 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.323 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.323 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.323 07:47:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:38.323 07:47:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.323 07:47:56 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.323 07:47:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:38.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82649 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82649 00:20:38.323 07:47:56 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 82649 ']' 00:20:38.323 07:47:56 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.323 07:47:56 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:38.323 07:47:56 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:38.323 07:47:56 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:38.323 07:47:56 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:38.323 07:47:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:38.323 [2024-11-08 07:47:56.176177] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:20:38.323 [2024-11-08 07:47:56.176939] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.583 [2024-11-08 07:47:56.328567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.583 [2024-11-08 07:47:56.394259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.583 [2024-11-08 07:47:56.394328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.583 [2024-11-08 07:47:56.394344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.583 [2024-11-08 07:47:56.394357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.583 [2024-11-08 07:47:56.394368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.583 [2024-11-08 07:47:56.394786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.583 [2024-11-08 07:47:56.464421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:20:38.842 07:47:56 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:38.842 07:47:56 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.842 07:47:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:38.842 07:47:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:38.842 [2024-11-08 07:47:56.601898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.842 07:47:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:38.842 07:47:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:38.842 ************************************ 00:20:38.842 START TEST fio_dif_1_default 00:20:38.842 ************************************ 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:38.842 07:47:56 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:38.842 bdev_null0 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.842 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:38.843 [2024-11-08 07:47:56.649943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:38.843 { 00:20:38.843 "params": { 00:20:38.843 "name": "Nvme$subsystem", 00:20:38.843 "trtype": "$TEST_TRANSPORT", 00:20:38.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.843 "adrfam": "ipv4", 00:20:38.843 "trsvcid": "$NVMF_PORT", 00:20:38.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.843 "hdgst": ${hdgst:-false}, 
00:20:38.843 "ddgst": ${ddgst:-false} 00:20:38.843 }, 00:20:38.843 "method": "bdev_nvme_attach_controller" 00:20:38.843 } 00:20:38.843 EOF 00:20:38.843 )") 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:38.843 "params": { 00:20:38.843 "name": "Nvme0", 00:20:38.843 "trtype": "tcp", 00:20:38.843 "traddr": "10.0.0.3", 00:20:38.843 "adrfam": "ipv4", 00:20:38.843 "trsvcid": "4420", 00:20:38.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:38.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:38.843 "hdgst": false, 00:20:38.843 "ddgst": false 00:20:38.843 }, 00:20:38.843 "method": "bdev_nvme_attach_controller" 00:20:38.843 }' 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:38.843 07:47:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:39.102 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:39.102 fio-3.35 00:20:39.102 Starting 1 thread 00:20:51.315 00:20:51.315 filename0: (groupid=0, jobs=1): err= 0: pid=82708: Fri Nov 8 07:48:07 2024 00:20:51.315 read: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(472MiB/10001msec) 00:20:51.315 slat (usec): min=5, max=268, avg= 6.14, stdev= 2.37 00:20:51.315 clat (usec): min=277, max=4188, avg=314.20, stdev=33.45 00:20:51.315 lat (usec): min=282, max=4209, avg=320.34, stdev=33.95 00:20:51.315 clat percentiles (usec): 00:20:51.315 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 302], 00:20:51.315 | 30.00th=[ 306], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:20:51.315 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 351], 00:20:51.315 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 537], 99.95th=[ 619], 00:20:51.315 | 99.99th=[ 1139] 00:20:51.315 bw ( KiB/s): min=43072, max=49888, per=100.00%, avg=48323.63, stdev=1598.32, samples=19 00:20:51.315 iops : min=10768, max=12472, avg=12080.79, stdev=399.55, samples=19 00:20:51.315 lat (usec) : 500=99.87%, 750=0.10%, 1000=0.01% 00:20:51.315 lat (msec) : 2=0.02%, 10=0.01% 00:20:51.315 cpu : usr=79.31%, sys=18.86%, ctx=597, majf=0, minf=9 00:20:51.315 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.315 issued rwts: total=120808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.315 latency : target=0, 
window=0, percentile=100.00%, depth=4 00:20:51.315 00:20:51.315 Run status group 0 (all jobs): 00:20:51.315 READ: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=472MiB (495MB), run=10001-10001msec 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:51.315 ************************************ 00:20:51.315 END TEST fio_dif_1_default 00:20:51.315 ************************************ 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.315 00:20:51.315 real 0m11.020s 00:20:51.315 user 0m8.569s 00:20:51.315 sys 0m2.201s 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:51.315 07:48:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:51.315 07:48:07 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:51.315 07:48:07 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:51.315 07:48:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:51.315 ************************************ 00:20:51.315 START TEST fio_dif_1_multi_subsystems 00:20:51.315 ************************************ 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.315 
bdev_null0 00:20:51.315 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.316 [2024-11-08 07:48:07.732876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.316 bdev_null1 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.316 07:48:07 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.316 { 00:20:51.316 "params": { 00:20:51.316 "name": "Nvme$subsystem", 00:20:51.316 "trtype": "$TEST_TRANSPORT", 00:20:51.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.316 "adrfam": "ipv4", 00:20:51.316 "trsvcid": "$NVMF_PORT", 00:20:51.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.316 "hdgst": ${hdgst:-false}, 00:20:51.316 "ddgst": ${ddgst:-false} 00:20:51.316 }, 00:20:51.316 "method": "bdev_nvme_attach_controller" 00:20:51.316 } 00:20:51.316 EOF 00:20:51.316 )") 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@72 -- # (( file <= files )) 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:51.316 { 00:20:51.316 "params": { 00:20:51.316 "name": "Nvme$subsystem", 00:20:51.316 "trtype": "$TEST_TRANSPORT", 00:20:51.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:51.316 "adrfam": "ipv4", 00:20:51.316 "trsvcid": "$NVMF_PORT", 00:20:51.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:51.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:51.316 "hdgst": ${hdgst:-false}, 00:20:51.316 "ddgst": ${ddgst:-false} 00:20:51.316 }, 00:20:51.316 "method": "bdev_nvme_attach_controller" 00:20:51.316 } 00:20:51.316 EOF 00:20:51.316 )") 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:51.316 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:51.317 "params": { 00:20:51.317 "name": "Nvme0", 00:20:51.317 "trtype": "tcp", 00:20:51.317 "traddr": "10.0.0.3", 00:20:51.317 "adrfam": "ipv4", 00:20:51.317 "trsvcid": "4420", 00:20:51.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:51.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:51.317 "hdgst": false, 00:20:51.317 "ddgst": false 00:20:51.317 }, 00:20:51.317 "method": "bdev_nvme_attach_controller" 00:20:51.317 },{ 00:20:51.317 "params": { 00:20:51.317 "name": "Nvme1", 00:20:51.317 "trtype": "tcp", 00:20:51.317 "traddr": "10.0.0.3", 00:20:51.317 "adrfam": "ipv4", 00:20:51.317 "trsvcid": "4420", 00:20:51.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.317 "hdgst": false, 00:20:51.317 "ddgst": false 00:20:51.317 }, 00:20:51.317 "method": "bdev_nvme_attach_controller" 00:20:51.317 }' 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 
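For reference, the subsystem setup traced above reduces to the following shell sketch. It only restates the rpc_cmd calls already shown in the trace (one DIF-type-1 null bdev with 16-byte metadata per subsystem, one NVMe-oF subsystem per bdev, and a TCP listener on 10.0.0.3:4420); the scripts/rpc.py invocation and the loop are illustrative assumptions, since the harness issues the same RPCs through its rpc_cmd wrapper over the target's RPC socket.

    # Illustrative replay of the traced setup; assumes a running SPDK target with
    # the TCP transport already created (not shown in this excerpt).
    for sub in 0 1; do
        scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.3 -s 4420
    done

Teardown later in the log mirrors this in reverse: nvmf_delete_subsystem followed by bdev_null_delete for each subsystem.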
00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:51.317 07:48:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:51.317 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:51.317 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:51.317 fio-3.35 00:20:51.317 Starting 2 threads 00:21:01.372 00:21:01.372 filename0: (groupid=0, jobs=1): err= 0: pid=82868: Fri Nov 8 07:48:18 2024 00:21:01.372 read: IOPS=6359, BW=24.8MiB/s (26.0MB/s)(248MiB/10001msec) 00:21:01.372 slat (nsec): min=5405, max=84784, avg=10522.07, stdev=2711.09 00:21:01.372 clat (usec): min=126, max=1604, avg=601.00, stdev=31.12 00:21:01.372 lat (usec): min=132, max=1614, avg=611.52, stdev=31.42 00:21:01.372 clat percentiles (usec): 00:21:01.372 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:21:01.372 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 611], 00:21:01.372 | 70.00th=[ 611], 80.00th=[ 627], 90.00th=[ 635], 95.00th=[ 652], 00:21:01.372 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 848], 99.95th=[ 889], 00:21:01.372 | 99.99th=[ 1090] 00:21:01.372 bw ( KiB/s): min=25024, max=25664, per=50.04%, avg=25463.58, stdev=151.17, samples=19 00:21:01.372 iops : min= 6256, max= 6416, avg=6365.89, stdev=37.79, samples=19 00:21:01.372 lat (usec) : 250=0.01%, 500=0.03%, 750=99.76%, 1000=0.19% 00:21:01.372 lat (msec) : 2=0.02% 00:21:01.372 cpu : usr=87.39%, sys=11.55%, ctx=9, majf=0, minf=0 00:21:01.372 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.372 issued rwts: total=63601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.372 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:01.372 filename1: (groupid=0, jobs=1): err= 0: pid=82869: Fri Nov 8 07:48:18 2024 00:21:01.372 read: IOPS=6360, BW=24.8MiB/s (26.1MB/s)(249MiB/10001msec) 00:21:01.372 slat (nsec): min=5413, max=57902, avg=10472.85, stdev=2601.92 00:21:01.372 clat (usec): min=309, max=1590, avg=601.56, stdev=34.54 00:21:01.372 lat (usec): min=315, max=1602, avg=612.03, stdev=35.02 00:21:01.372 clat percentiles (usec): 00:21:01.372 | 1.00th=[ 529], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 578], 00:21:01.372 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 611], 00:21:01.372 | 70.00th=[ 619], 80.00th=[ 627], 90.00th=[ 644], 95.00th=[ 652], 00:21:01.372 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 873], 99.95th=[ 955], 00:21:01.372 | 99.99th=[ 1205] 00:21:01.372 bw ( KiB/s): min=25024, max=25664, per=50.06%, avg=25472.00, stdev=151.60, samples=19 00:21:01.372 iops : min= 6256, max= 6416, avg=6368.00, stdev=37.90, samples=19 00:21:01.372 lat (usec) : 500=0.09%, 750=99.71%, 1000=0.16% 00:21:01.372 lat (msec) : 2=0.04% 00:21:01.372 cpu : usr=87.42%, sys=11.49%, ctx=22, majf=0, minf=0 00:21:01.372 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.372 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.372 issued rwts: total=63616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.372 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:01.372 00:21:01.372 Run status group 0 (all jobs): 00:21:01.372 READ: bw=49.7MiB/s (52.1MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.1MB/s), io=497MiB (521MB), run=10001-10001msec 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 ************************************ 00:21:01.372 END TEST fio_dif_1_multi_subsystems 00:21:01.372 ************************************ 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.372 00:21:01.372 real 0m11.187s 00:21:01.372 user 0m18.276s 00:21:01.372 sys 0m2.637s 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:01.372 07:48:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 07:48:18 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:21:01.372 07:48:18 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:01.372 07:48:18 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:01.372 07:48:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:01.372 ************************************ 00:21:01.372 START TEST fio_dif_rand_params 00:21:01.372 ************************************ 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:01.372 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.373 bdev_null0 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.373 [2024-11-08 07:48:18.978427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:01.373 07:48:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.373 { 00:21:01.373 "params": { 00:21:01.373 "name": "Nvme$subsystem", 00:21:01.373 "trtype": "$TEST_TRANSPORT", 00:21:01.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.373 "adrfam": "ipv4", 00:21:01.373 "trsvcid": "$NVMF_PORT", 00:21:01.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.373 "hdgst": ${hdgst:-false}, 00:21:01.373 "ddgst": ${ddgst:-false} 00:21:01.373 }, 00:21:01.373 "method": "bdev_nvme_attach_controller" 00:21:01.373 } 00:21:01.373 EOF 00:21:01.373 )") 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:01.373 07:48:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:01.373 "params": { 00:21:01.373 "name": "Nvme0", 00:21:01.373 "trtype": "tcp", 00:21:01.373 "traddr": "10.0.0.3", 00:21:01.373 "adrfam": "ipv4", 00:21:01.373 "trsvcid": "4420", 00:21:01.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:01.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:01.373 "hdgst": false, 00:21:01.373 "ddgst": false 00:21:01.373 }, 00:21:01.373 "method": "bdev_nvme_attach_controller" 00:21:01.373 }' 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:01.373 07:48:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.373 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:01.373 ... 
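The fio run that starts below is driven by two file descriptors: the bdev_nvme_attach_controller JSON printed just above arrives on --spdk_json_conf, and the generated job file arrives as the trailing /dev/fd argument. A minimal sketch of the same invocation with ordinary files in place of the /dev/fd substitutions (bdev.json and dif.fio are illustrative names, not taken from the log):

    # Sketch: run fio through SPDK's bdev ioengine plugin against the target configured above.
    # bdev.json holds the printed attach-controller block; dif.fio holds the generated job section.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio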
00:21:01.373 fio-3.35 00:21:01.373 Starting 3 threads 00:21:07.944 00:21:07.944 filename0: (groupid=0, jobs=1): err= 0: pid=83030: Fri Nov 8 07:48:24 2024 00:21:07.944 read: IOPS=327, BW=41.0MiB/s (43.0MB/s)(205MiB/5007msec) 00:21:07.944 slat (nsec): min=4663, max=64618, avg=13990.30, stdev=10793.74 00:21:07.944 clat (usec): min=3407, max=10209, avg=9122.30, stdev=320.44 00:21:07.944 lat (usec): min=3411, max=10224, avg=9136.29, stdev=322.56 00:21:07.944 clat percentiles (usec): 00:21:07.944 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 8979], 00:21:07.944 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9110], 00:21:07.944 | 70.00th=[ 9241], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9503], 00:21:07.944 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[10159], 99.95th=[10159], 00:21:07.944 | 99.99th=[10159] 00:21:07.944 bw ( KiB/s): min=40704, max=43008, per=33.40%, avg=41932.80, stdev=647.63, samples=10 00:21:07.944 iops : min= 318, max= 336, avg=327.60, stdev= 5.06, samples=10 00:21:07.944 lat (msec) : 4=0.18%, 10=99.63%, 20=0.18% 00:21:07.944 cpu : usr=89.77%, sys=9.71%, ctx=17, majf=0, minf=0 00:21:07.944 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.944 issued rwts: total=1641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.944 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:07.944 filename0: (groupid=0, jobs=1): err= 0: pid=83031: Fri Nov 8 07:48:24 2024 00:21:07.944 read: IOPS=326, BW=40.9MiB/s (42.8MB/s)(204MiB/5002msec) 00:21:07.944 slat (nsec): min=3788, max=61107, avg=16045.02, stdev=10029.87 00:21:07.944 clat (usec): min=8794, max=9977, avg=9142.04, stdev=168.59 00:21:07.944 lat (usec): min=8801, max=9990, avg=9158.08, stdev=171.19 00:21:07.944 clat percentiles (usec): 00:21:07.944 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 8979], 00:21:07.944 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9110], 00:21:07.944 | 70.00th=[ 9241], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9503], 00:21:07.944 | 99.00th=[ 9765], 99.50th=[ 9765], 99.90th=[10028], 99.95th=[10028], 00:21:07.945 | 99.99th=[10028] 00:21:07.945 bw ( KiB/s): min=41472, max=42240, per=33.37%, avg=41898.67, stdev=404.77, samples=9 00:21:07.945 iops : min= 324, max= 330, avg=327.33, stdev= 3.16, samples=9 00:21:07.945 lat (msec) : 10=100.00% 00:21:07.945 cpu : usr=91.28%, sys=8.28%, ctx=5, majf=0, minf=0 00:21:07.945 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.945 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.945 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:07.945 filename0: (groupid=0, jobs=1): err= 0: pid=83032: Fri Nov 8 07:48:24 2024 00:21:07.945 read: IOPS=326, BW=40.9MiB/s (42.8MB/s)(204MiB/5002msec) 00:21:07.945 slat (usec): min=4, max=109, avg=15.53, stdev=10.34 00:21:07.945 clat (usec): min=8144, max=10235, avg=9143.78, stdev=177.09 00:21:07.945 lat (usec): min=8151, max=10246, avg=9159.31, stdev=179.99 00:21:07.945 clat percentiles (usec): 00:21:07.945 | 1.00th=[ 8848], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 8979], 00:21:07.945 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9110], 
60.00th=[ 9110], 00:21:07.945 | 70.00th=[ 9241], 80.00th=[ 9241], 90.00th=[ 9372], 95.00th=[ 9503], 00:21:07.945 | 99.00th=[ 9634], 99.50th=[ 9896], 99.90th=[10159], 99.95th=[10290], 00:21:07.945 | 99.99th=[10290] 00:21:07.945 bw ( KiB/s): min=41472, max=42240, per=33.37%, avg=41898.67, stdev=404.77, samples=9 00:21:07.945 iops : min= 324, max= 330, avg=327.33, stdev= 3.16, samples=9 00:21:07.945 lat (msec) : 10=99.82%, 20=0.18% 00:21:07.945 cpu : usr=90.36%, sys=8.98%, ctx=10, majf=0, minf=0 00:21:07.945 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.945 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.945 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:07.945 00:21:07.945 Run status group 0 (all jobs): 00:21:07.945 READ: bw=123MiB/s (129MB/s), 40.9MiB/s-41.0MiB/s (42.8MB/s-43.0MB/s), io=614MiB (644MB), run=5002-5007msec 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:07.945 07:48:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 bdev_null0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 [2024-11-08 07:48:24.977698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 bdev_null1 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 bdev_null2 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # 
local fio_dir=/usr/src/fio 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:21:07.945 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.946 { 00:21:07.946 "params": { 00:21:07.946 "name": "Nvme$subsystem", 00:21:07.946 "trtype": "$TEST_TRANSPORT", 00:21:07.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.946 "adrfam": "ipv4", 00:21:07.946 "trsvcid": "$NVMF_PORT", 00:21:07.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.946 "hdgst": ${hdgst:-false}, 00:21:07.946 "ddgst": ${ddgst:-false} 00:21:07.946 }, 00:21:07.946 "method": "bdev_nvme_attach_controller" 00:21:07.946 } 00:21:07.946 EOF 00:21:07.946 )") 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.946 { 00:21:07.946 "params": { 00:21:07.946 "name": "Nvme$subsystem", 00:21:07.946 "trtype": "$TEST_TRANSPORT", 00:21:07.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.946 "adrfam": "ipv4", 00:21:07.946 "trsvcid": "$NVMF_PORT", 00:21:07.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.946 "hdgst": ${hdgst:-false}, 00:21:07.946 "ddgst": ${ddgst:-false} 00:21:07.946 }, 00:21:07.946 "method": "bdev_nvme_attach_controller" 00:21:07.946 } 00:21:07.946 EOF 00:21:07.946 )") 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:07.946 07:48:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.946 { 00:21:07.946 "params": { 00:21:07.946 "name": "Nvme$subsystem", 00:21:07.946 "trtype": "$TEST_TRANSPORT", 00:21:07.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.946 "adrfam": "ipv4", 00:21:07.946 "trsvcid": "$NVMF_PORT", 00:21:07.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.946 "hdgst": ${hdgst:-false}, 00:21:07.946 "ddgst": ${ddgst:-false} 00:21:07.946 }, 00:21:07.946 "method": "bdev_nvme_attach_controller" 00:21:07.946 } 00:21:07.946 EOF 00:21:07.946 )") 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:07.946 "params": { 00:21:07.946 "name": "Nvme0", 00:21:07.946 "trtype": "tcp", 00:21:07.946 "traddr": "10.0.0.3", 00:21:07.946 "adrfam": "ipv4", 00:21:07.946 "trsvcid": "4420", 00:21:07.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:07.946 "hdgst": false, 00:21:07.946 "ddgst": false 00:21:07.946 }, 00:21:07.946 "method": "bdev_nvme_attach_controller" 00:21:07.946 },{ 00:21:07.946 "params": { 00:21:07.946 "name": "Nvme1", 00:21:07.946 "trtype": "tcp", 00:21:07.946 "traddr": "10.0.0.3", 00:21:07.946 "adrfam": "ipv4", 00:21:07.946 "trsvcid": "4420", 00:21:07.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.946 "hdgst": false, 00:21:07.946 "ddgst": false 00:21:07.946 }, 00:21:07.946 "method": "bdev_nvme_attach_controller" 00:21:07.946 },{ 00:21:07.946 "params": { 00:21:07.946 "name": "Nvme2", 00:21:07.946 "trtype": "tcp", 00:21:07.946 "traddr": "10.0.0.3", 00:21:07.946 "adrfam": "ipv4", 00:21:07.946 "trsvcid": "4420", 00:21:07.946 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.946 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:07.946 "hdgst": false, 00:21:07.946 "ddgst": false 00:21:07.946 }, 00:21:07.946 "method": "bdev_nvme_attach_controller" 00:21:07.946 }' 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:07.946 07:48:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.946 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:07.946 ... 00:21:07.946 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:07.946 ... 00:21:07.946 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:07.946 ... 00:21:07.946 fio-3.35 00:21:07.946 Starting 24 threads 00:21:20.163 00:21:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=83130: Fri Nov 8 07:48:36 2024 00:21:20.163 read: IOPS=287, BW=1151KiB/s (1178kB/s)(11.3MiB/10042msec) 00:21:20.163 slat (usec): min=5, max=8030, avg=33.87, stdev=255.11 00:21:20.163 clat (msec): min=9, max=117, avg=55.41, stdev=18.15 00:21:20.163 lat (msec): min=9, max=117, avg=55.45, stdev=18.15 00:21:20.163 clat percentiles (msec): 00:21:20.163 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 34], 20.00th=[ 40], 00:21:20.163 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 59], 00:21:20.163 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 82], 95.00th=[ 90], 00:21:20.163 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 106], 99.95th=[ 108], 00:21:20.163 | 99.99th=[ 117] 00:21:20.163 bw ( KiB/s): min= 816, max= 2178, per=4.22%, avg=1148.50, stdev=285.89, samples=20 00:21:20.163 iops : min= 204, max= 544, avg=287.10, stdev=71.38, samples=20 00:21:20.163 lat (msec) : 10=0.14%, 20=2.39%, 50=35.13%, 100=62.17%, 250=0.17% 00:21:20.163 cpu : usr=43.19%, sys=2.11%, ctx=1431, majf=0, minf=9 00:21:20.163 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=80.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=83131: Fri Nov 8 07:48:36 2024 00:21:20.163 read: IOPS=278, BW=1113KiB/s (1140kB/s)(10.9MiB/10042msec) 00:21:20.163 slat (usec): min=2, max=8037, avg=31.27, stdev=271.67 00:21:20.163 clat (msec): min=10, max=112, avg=57.29, stdev=18.40 00:21:20.163 lat (msec): min=10, max=112, avg=57.32, stdev=18.39 00:21:20.163 clat percentiles (msec): 00:21:20.163 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 41], 00:21:20.163 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:21:20.163 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 85], 95.00th=[ 92], 00:21:20.163 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 112], 00:21:20.163 | 99.99th=[ 112] 00:21:20.163 bw ( KiB/s): min= 840, max= 2015, per=4.08%, avg=1110.35, stdev=255.81, samples=20 00:21:20.163 iops : min= 210, max= 503, avg=277.55, stdev=63.81, samples=20 00:21:20.163 lat (msec) : 20=2.22%, 50=31.82%, 100=65.14%, 250=0.82% 00:21:20.163 cpu : usr=39.68%, sys=1.77%, ctx=1195, majf=0, minf=9 00:21:20.163 IO depths : 1=0.3%, 2=1.1%, 4=3.6%, 8=79.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 issued rwts: total=2794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:21:20.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=83132: Fri Nov 8 07:48:36 2024 00:21:20.163 read: IOPS=277, BW=1110KiB/s (1137kB/s)(10.9MiB/10026msec) 00:21:20.163 slat (usec): min=4, max=8041, avg=23.96, stdev=215.18 00:21:20.163 clat (msec): min=19, max=107, avg=57.53, stdev=16.97 00:21:20.163 lat (msec): min=19, max=107, avg=57.56, stdev=16.97 00:21:20.163 clat percentiles (msec): 00:21:20.163 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 46], 00:21:20.163 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 61], 00:21:20.163 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 92], 00:21:20.163 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 106], 99.95th=[ 108], 00:21:20.163 | 99.99th=[ 108] 00:21:20.163 bw ( KiB/s): min= 840, max= 1648, per=4.07%, avg=1106.55, stdev=193.11, samples=20 00:21:20.163 iops : min= 210, max= 412, avg=276.60, stdev=48.28, samples=20 00:21:20.163 lat (msec) : 20=0.54%, 50=34.64%, 100=64.71%, 250=0.11% 00:21:20.163 cpu : usr=33.88%, sys=1.51%, ctx=970, majf=0, minf=9 00:21:20.163 IO depths : 1=0.3%, 2=0.8%, 4=2.4%, 8=80.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 issued rwts: total=2783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=83133: Fri Nov 8 07:48:36 2024 00:21:20.163 read: IOPS=290, BW=1161KiB/s (1189kB/s)(11.4MiB/10026msec) 00:21:20.163 slat (usec): min=4, max=8043, avg=30.71, stdev=278.08 00:21:20.163 clat (msec): min=10, max=119, avg=54.96, stdev=18.47 00:21:20.163 lat (msec): min=10, max=119, avg=54.99, stdev=18.47 00:21:20.163 clat percentiles (msec): 00:21:20.163 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 39], 00:21:20.163 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 00:21:20.163 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 89], 00:21:20.163 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 110], 99.95th=[ 112], 00:21:20.163 | 99.99th=[ 121] 00:21:20.163 bw ( KiB/s): min= 816, max= 2304, per=4.25%, avg=1157.75, stdev=314.50, samples=20 00:21:20.163 iops : min= 204, max= 576, avg=289.40, stdev=78.62, samples=20 00:21:20.163 lat (msec) : 20=0.96%, 50=38.78%, 100=60.05%, 250=0.21% 00:21:20.163 cpu : usr=39.77%, sys=1.61%, ctx=1131, majf=0, minf=9 00:21:20.163 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.3%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 issued rwts: total=2911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=83134: Fri Nov 8 07:48:36 2024 00:21:20.163 read: IOPS=286, BW=1145KiB/s (1173kB/s)(11.2MiB/10039msec) 00:21:20.163 slat (usec): min=5, max=8043, avg=24.61, stdev=160.58 00:21:20.163 clat (msec): min=7, max=119, avg=55.76, stdev=19.29 00:21:20.163 lat (msec): min=7, max=119, avg=55.79, stdev=19.29 00:21:20.163 clat percentiles (msec): 00:21:20.163 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 28], 20.00th=[ 40], 00:21:20.163 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 00:21:20.163 | 70.00th=[ 62], 80.00th=[ 72], 90.00th=[ 84], 
95.00th=[ 93], 00:21:20.163 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 121], 00:21:20.163 | 99.99th=[ 121] 00:21:20.163 bw ( KiB/s): min= 792, max= 2693, per=4.20%, avg=1142.65, stdev=390.16, samples=20 00:21:20.163 iops : min= 198, max= 673, avg=285.65, stdev=97.49, samples=20 00:21:20.163 lat (msec) : 10=0.17%, 20=4.11%, 50=34.79%, 100=60.82%, 250=0.10% 00:21:20.163 cpu : usr=35.53%, sys=1.45%, ctx=950, majf=0, minf=9 00:21:20.163 IO depths : 1=0.1%, 2=0.4%, 4=1.2%, 8=81.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:21:20.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.163 issued rwts: total=2874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.163 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.163 filename0: (groupid=0, jobs=1): err= 0: pid=83135: Fri Nov 8 07:48:36 2024 00:21:20.163 read: IOPS=269, BW=1080KiB/s (1105kB/s)(10.6MiB/10019msec) 00:21:20.163 slat (usec): min=4, max=4040, avg=24.10, stdev=109.93 00:21:20.163 clat (msec): min=19, max=134, avg=59.12, stdev=19.16 00:21:20.163 lat (msec): min=19, max=134, avg=59.14, stdev=19.16 00:21:20.163 clat percentiles (msec): 00:21:20.163 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 41], 00:21:20.163 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:21:20.164 | 70.00th=[ 65], 80.00th=[ 73], 90.00th=[ 86], 95.00th=[ 96], 00:21:20.164 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 134], 00:21:20.164 | 99.99th=[ 134] 00:21:20.164 bw ( KiB/s): min= 656, max= 1539, per=3.94%, avg=1071.74, stdev=230.59, samples=19 00:21:20.164 iops : min= 164, max= 384, avg=267.89, stdev=57.56, samples=19 00:21:20.164 lat (msec) : 20=0.67%, 50=33.54%, 100=61.91%, 250=3.88% 00:21:20.164 cpu : usr=41.07%, sys=1.95%, ctx=1387, majf=0, minf=9 00:21:20.164 IO depths : 1=0.3%, 2=1.8%, 4=6.4%, 8=76.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 issued rwts: total=2704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.164 filename0: (groupid=0, jobs=1): err= 0: pid=83136: Fri Nov 8 07:48:36 2024 00:21:20.164 read: IOPS=279, BW=1117KiB/s (1144kB/s)(10.9MiB/10023msec) 00:21:20.164 slat (usec): min=4, max=8036, avg=32.76, stdev=311.68 00:21:20.164 clat (msec): min=12, max=116, avg=57.11, stdev=18.07 00:21:20.164 lat (msec): min=12, max=116, avg=57.14, stdev=18.07 00:21:20.164 clat percentiles (msec): 00:21:20.164 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 40], 00:21:20.164 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 61], 00:21:20.164 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:21:20.164 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 116], 00:21:20.164 | 99.99th=[ 116] 00:21:20.164 bw ( KiB/s): min= 784, max= 1924, per=4.09%, avg=1112.95, stdev=248.78, samples=20 00:21:20.164 iops : min= 196, max= 481, avg=278.20, stdev=62.20, samples=20 00:21:20.164 lat (msec) : 20=0.07%, 50=34.32%, 100=64.93%, 250=0.68% 00:21:20.164 cpu : usr=41.26%, sys=1.98%, ctx=1137, majf=0, minf=9 00:21:20.164 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=79.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.7%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.164 filename0: (groupid=0, jobs=1): err= 0: pid=83137: Fri Nov 8 07:48:36 2024 00:21:20.164 read: IOPS=280, BW=1123KiB/s (1150kB/s)(11.0MiB/10032msec) 00:21:20.164 slat (usec): min=4, max=8033, avg=25.03, stdev=272.21 00:21:20.164 clat (msec): min=2, max=119, avg=56.78, stdev=20.69 00:21:20.164 lat (msec): min=2, max=119, avg=56.81, stdev=20.69 00:21:20.164 clat percentiles (msec): 00:21:20.164 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 46], 00:21:20.164 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:21:20.164 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 90], 00:21:20.164 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 112], 99.95th=[ 112], 00:21:20.164 | 99.99th=[ 121] 00:21:20.164 bw ( KiB/s): min= 793, max= 2954, per=4.12%, avg=1120.55, stdev=451.72, samples=20 00:21:20.164 iops : min= 198, max= 738, avg=280.00, stdev=112.85, samples=20 00:21:20.164 lat (msec) : 4=2.77%, 10=0.64%, 20=2.34%, 50=26.77%, 100=66.99% 00:21:20.164 lat (msec) : 250=0.50% 00:21:20.164 cpu : usr=35.15%, sys=1.54%, ctx=1012, majf=0, minf=9 00:21:20.164 IO depths : 1=0.3%, 2=1.8%, 4=6.3%, 8=75.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 complete : 0=0.0%, 4=89.6%, 8=9.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 issued rwts: total=2817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.164 filename1: (groupid=0, jobs=1): err= 0: pid=83138: Fri Nov 8 07:48:36 2024 00:21:20.164 read: IOPS=293, BW=1173KiB/s (1201kB/s)(11.5MiB/10023msec) 00:21:20.164 slat (usec): min=2, max=8022, avg=28.18, stdev=209.34 00:21:20.164 clat (msec): min=12, max=107, avg=54.39, stdev=18.20 00:21:20.164 lat (msec): min=12, max=107, avg=54.42, stdev=18.20 00:21:20.164 clat percentiles (msec): 00:21:20.164 | 1.00th=[ 15], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 39], 00:21:20.164 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 59], 00:21:20.164 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 83], 95.00th=[ 89], 00:21:20.164 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 106], 99.95th=[ 107], 00:21:20.164 | 99.99th=[ 108] 00:21:20.164 bw ( KiB/s): min= 832, max= 2312, per=4.30%, avg=1171.30, stdev=305.78, samples=20 00:21:20.164 iops : min= 208, max= 578, avg=292.80, stdev=76.47, samples=20 00:21:20.164 lat (msec) : 20=2.18%, 50=39.03%, 100=58.32%, 250=0.48% 00:21:20.164 cpu : usr=41.54%, sys=1.91%, ctx=1277, majf=0, minf=9 00:21:20.164 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=82.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 issued rwts: total=2939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.164 filename1: (groupid=0, jobs=1): err= 0: pid=83139: Fri Nov 8 07:48:36 2024 00:21:20.164 read: IOPS=285, BW=1141KiB/s (1169kB/s)(11.2MiB/10039msec) 00:21:20.164 slat (usec): min=3, max=4034, avg=18.70, stdev=78.32 00:21:20.164 clat (msec): min=2, max=126, avg=55.93, stdev=22.22 00:21:20.164 lat (msec): min=2, max=126, avg=55.95, stdev=22.22 00:21:20.164 clat percentiles (msec): 00:21:20.164 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 23], 20.00th=[ 
40], 00:21:20.164 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 00:21:20.164 | 70.00th=[ 63], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:21:20.164 | 99.00th=[ 103], 99.50th=[ 114], 99.90th=[ 120], 99.95th=[ 121], 00:21:20.164 | 99.99th=[ 127] 00:21:20.164 bw ( KiB/s): min= 785, max= 3301, per=4.18%, avg=1138.30, stdev=526.15, samples=20 00:21:20.164 iops : min= 196, max= 825, avg=284.45, stdev=131.51, samples=20 00:21:20.164 lat (msec) : 4=2.23%, 10=1.68%, 20=3.46%, 50=29.54%, 100=62.08% 00:21:20.164 lat (msec) : 250=1.01% 00:21:20.164 cpu : usr=37.15%, sys=1.81%, ctx=1397, majf=0, minf=0 00:21:20.164 IO depths : 1=0.2%, 2=1.5%, 4=5.2%, 8=76.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 complete : 0=0.0%, 4=89.4%, 8=9.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 issued rwts: total=2864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.164 filename1: (groupid=0, jobs=1): err= 0: pid=83140: Fri Nov 8 07:48:36 2024 00:21:20.164 read: IOPS=283, BW=1135KiB/s (1162kB/s)(11.1MiB/10042msec) 00:21:20.164 slat (usec): min=5, max=8038, avg=27.06, stdev=199.19 00:21:20.164 clat (msec): min=10, max=120, avg=56.24, stdev=20.24 00:21:20.164 lat (msec): min=10, max=120, avg=56.26, stdev=20.24 00:21:20.164 clat percentiles (msec): 00:21:20.164 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 27], 20.00th=[ 40], 00:21:20.164 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 61], 00:21:20.164 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:21:20.164 | 99.00th=[ 100], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:21:20.164 | 99.99th=[ 121] 00:21:20.164 bw ( KiB/s): min= 760, max= 2650, per=4.16%, avg=1132.10, stdev=391.84, samples=20 00:21:20.164 iops : min= 190, max= 662, avg=283.00, stdev=97.86, samples=20 00:21:20.164 lat (msec) : 20=4.07%, 50=32.05%, 100=63.04%, 250=0.84% 00:21:20.164 cpu : usr=39.13%, sys=1.71%, ctx=1150, majf=0, minf=9 00:21:20.164 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 issued rwts: total=2849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.164 filename1: (groupid=0, jobs=1): err= 0: pid=83141: Fri Nov 8 07:48:36 2024 00:21:20.164 read: IOPS=280, BW=1121KiB/s (1148kB/s)(11.0MiB/10039msec) 00:21:20.164 slat (nsec): min=4583, max=77954, avg=18701.82, stdev=12733.11 00:21:20.164 clat (msec): min=11, max=125, avg=56.95, stdev=18.87 00:21:20.164 lat (msec): min=11, max=125, avg=56.97, stdev=18.87 00:21:20.164 clat percentiles (msec): 00:21:20.164 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 41], 00:21:20.164 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:21:20.164 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:21:20.164 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 111], 99.95th=[ 121], 00:21:20.164 | 99.99th=[ 126] 00:21:20.164 bw ( KiB/s): min= 760, max= 2304, per=4.11%, avg=1118.70, stdev=318.78, samples=20 00:21:20.164 iops : min= 190, max= 576, avg=279.65, stdev=79.69, samples=20 00:21:20.164 lat (msec) : 20=2.27%, 50=30.74%, 100=66.38%, 250=0.60% 00:21:20.164 cpu : usr=41.11%, sys=1.73%, ctx=1388, majf=0, minf=9 00:21:20.164 IO depths : 1=0.1%, 2=0.9%, 4=3.3%, 
8=79.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 issued rwts: total=2814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.164 filename1: (groupid=0, jobs=1): err= 0: pid=83142: Fri Nov 8 07:48:36 2024 00:21:20.164 read: IOPS=287, BW=1149KiB/s (1177kB/s)(11.2MiB/10025msec) 00:21:20.164 slat (usec): min=4, max=8045, avg=39.12, stdev=401.93 00:21:20.164 clat (usec): min=10642, max=98240, avg=55511.31, stdev=17545.11 00:21:20.164 lat (usec): min=10649, max=98246, avg=55550.43, stdev=17544.38 00:21:20.164 clat percentiles (usec): 00:21:20.164 | 1.00th=[23462], 5.00th=[25560], 10.00th=[35390], 20.00th=[38011], 00:21:20.164 | 30.00th=[47973], 40.00th=[49021], 50.00th=[57410], 60.00th=[60031], 00:21:20.164 | 70.00th=[60031], 80.00th=[70779], 90.00th=[83362], 95.00th=[87557], 00:21:20.164 | 99.00th=[95945], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:21:20.164 | 99.99th=[98042] 00:21:20.164 bw ( KiB/s): min= 840, max= 2047, per=4.21%, avg=1146.00, stdev=259.76, samples=20 00:21:20.164 iops : min= 210, max= 511, avg=286.45, stdev=64.81, samples=20 00:21:20.164 lat (msec) : 20=0.28%, 50=42.40%, 100=57.33% 00:21:20.164 cpu : usr=33.39%, sys=1.53%, ctx=926, majf=0, minf=9 00:21:20.164 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:20.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.164 issued rwts: total=2880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.164 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.164 filename1: (groupid=0, jobs=1): err= 0: pid=83143: Fri Nov 8 07:48:36 2024 00:21:20.165 read: IOPS=276, BW=1105KiB/s (1132kB/s)(10.8MiB/10034msec) 00:21:20.165 slat (usec): min=4, max=8038, avg=46.12, stdev=480.29 00:21:20.165 clat (msec): min=10, max=119, avg=57.71, stdev=18.36 00:21:20.165 lat (msec): min=10, max=119, avg=57.75, stdev=18.37 00:21:20.165 clat percentiles (msec): 00:21:20.165 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 45], 00:21:20.165 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 00:21:20.165 | 70.00th=[ 62], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:21:20.165 | 99.00th=[ 97], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 120], 00:21:20.165 | 99.99th=[ 120] 00:21:20.165 bw ( KiB/s): min= 848, max= 2064, per=4.05%, avg=1102.70, stdev=267.25, samples=20 00:21:20.165 iops : min= 212, max= 516, avg=275.65, stdev=66.84, samples=20 00:21:20.165 lat (msec) : 20=0.29%, 50=34.87%, 100=63.97%, 250=0.87% 00:21:20.165 cpu : usr=32.94%, sys=1.68%, ctx=931, majf=0, minf=9 00:21:20.165 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:20.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.165 filename1: (groupid=0, jobs=1): err= 0: pid=83144: Fri Nov 8 07:48:36 2024 00:21:20.165 read: IOPS=276, BW=1105KiB/s (1132kB/s)(10.8MiB/10035msec) 00:21:20.165 slat (usec): min=4, max=8036, avg=39.47, stdev=395.02 00:21:20.165 clat (msec): min=9, max=120, avg=57.66, 
stdev=18.85 00:21:20.165 lat (msec): min=9, max=120, avg=57.70, stdev=18.85 00:21:20.165 clat percentiles (msec): 00:21:20.165 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 35], 20.00th=[ 45], 00:21:20.165 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 60], 60.00th=[ 61], 00:21:20.165 | 70.00th=[ 62], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:21:20.165 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 120], 99.95th=[ 121], 00:21:20.165 | 99.99th=[ 121] 00:21:20.165 bw ( KiB/s): min= 848, max= 2336, per=4.05%, avg=1102.80, stdev=320.29, samples=20 00:21:20.165 iops : min= 212, max= 584, avg=275.70, stdev=80.07, samples=20 00:21:20.165 lat (msec) : 10=0.07%, 20=1.91%, 50=33.43%, 100=64.37%, 250=0.22% 00:21:20.165 cpu : usr=33.44%, sys=1.51%, ctx=919, majf=0, minf=9 00:21:20.165 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=76.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:20.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 complete : 0=0.0%, 4=89.2%, 8=9.5%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.165 filename1: (groupid=0, jobs=1): err= 0: pid=83145: Fri Nov 8 07:48:36 2024 00:21:20.165 read: IOPS=292, BW=1171KiB/s (1199kB/s)(11.5MiB/10024msec) 00:21:20.165 slat (usec): min=4, max=8044, avg=36.28, stdev=263.15 00:21:20.165 clat (msec): min=14, max=120, avg=54.46, stdev=18.40 00:21:20.165 lat (msec): min=14, max=120, avg=54.49, stdev=18.39 00:21:20.165 clat percentiles (msec): 00:21:20.165 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 39], 00:21:20.165 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 58], 00:21:20.165 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 92], 00:21:20.165 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 108], 99.95th=[ 111], 00:21:20.165 | 99.99th=[ 121] 00:21:20.165 bw ( KiB/s): min= 800, max= 2064, per=4.29%, avg=1167.20, stdev=269.77, samples=20 00:21:20.165 iops : min= 200, max= 516, avg=291.80, stdev=67.44, samples=20 00:21:20.165 lat (msec) : 20=1.64%, 50=41.14%, 100=56.92%, 250=0.31% 00:21:20.165 cpu : usr=41.71%, sys=1.73%, ctx=1361, majf=0, minf=9 00:21:20.165 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:20.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 issued rwts: total=2934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.165 filename2: (groupid=0, jobs=1): err= 0: pid=83146: Fri Nov 8 07:48:36 2024 00:21:20.165 read: IOPS=287, BW=1148KiB/s (1176kB/s)(11.2MiB/10009msec) 00:21:20.165 slat (usec): min=5, max=8045, avg=41.02, stdev=401.89 00:21:20.165 clat (msec): min=9, max=108, avg=55.55, stdev=18.12 00:21:20.165 lat (msec): min=9, max=108, avg=55.59, stdev=18.12 00:21:20.165 clat percentiles (msec): 00:21:20.165 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 39], 00:21:20.165 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 60], 00:21:20.165 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 90], 00:21:20.165 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 107], 00:21:20.165 | 99.99th=[ 109] 00:21:20.165 bw ( KiB/s): min= 848, max= 1923, per=4.21%, avg=1145.35, stdev=233.77, samples=20 00:21:20.165 iops : min= 212, max= 480, avg=286.30, stdev=58.31, samples=20 00:21:20.165 lat (msec) : 10=0.24%, 20=0.63%, 
50=39.75%, 100=59.21%, 250=0.17% 00:21:20.165 cpu : usr=34.50%, sys=1.73%, ctx=897, majf=0, minf=10 00:21:20.165 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:20.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 issued rwts: total=2873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.165 filename2: (groupid=0, jobs=1): err= 0: pid=83147: Fri Nov 8 07:48:36 2024 00:21:20.165 read: IOPS=289, BW=1157KiB/s (1185kB/s)(11.3MiB/10009msec) 00:21:20.165 slat (usec): min=4, max=16043, avg=35.26, stdev=372.75 00:21:20.165 clat (msec): min=9, max=120, avg=55.13, stdev=18.14 00:21:20.165 lat (msec): min=9, max=120, avg=55.16, stdev=18.13 00:21:20.165 clat percentiles (msec): 00:21:20.165 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 40], 00:21:20.165 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 60], 00:21:20.165 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 95], 00:21:20.165 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 109], 99.95th=[ 109], 00:21:20.165 | 99.99th=[ 121] 00:21:20.165 bw ( KiB/s): min= 816, max= 1667, per=4.24%, avg=1154.15, stdev=210.20, samples=20 00:21:20.165 iops : min= 204, max= 416, avg=288.50, stdev=52.45, samples=20 00:21:20.165 lat (msec) : 10=0.10%, 20=0.76%, 50=40.45%, 100=58.58%, 250=0.10% 00:21:20.165 cpu : usr=38.63%, sys=1.53%, ctx=1359, majf=0, minf=9 00:21:20.165 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=81.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:20.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 issued rwts: total=2895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.165 filename2: (groupid=0, jobs=1): err= 0: pid=83148: Fri Nov 8 07:48:36 2024 00:21:20.165 read: IOPS=289, BW=1159KiB/s (1187kB/s)(11.3MiB/10009msec) 00:21:20.165 slat (usec): min=5, max=8059, avg=40.85, stdev=400.86 00:21:20.165 clat (msec): min=9, max=108, avg=55.01, stdev=17.69 00:21:20.165 lat (msec): min=9, max=108, avg=55.05, stdev=17.69 00:21:20.165 clat percentiles (msec): 00:21:20.165 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 37], 00:21:20.165 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 60], 00:21:20.165 | 70.00th=[ 61], 80.00th=[ 70], 90.00th=[ 83], 95.00th=[ 88], 00:21:20.165 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:21:20.165 | 99.99th=[ 108] 00:21:20.165 bw ( KiB/s): min= 816, max= 1803, per=4.25%, avg=1156.15, stdev=219.65, samples=20 00:21:20.165 iops : min= 204, max= 450, avg=289.00, stdev=54.80, samples=20 00:21:20.165 lat (msec) : 10=0.10%, 20=0.69%, 50=41.45%, 100=57.59%, 250=0.17% 00:21:20.165 cpu : usr=33.66%, sys=1.55%, ctx=992, majf=0, minf=9 00:21:20.165 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:20.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 issued rwts: total=2900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.165 filename2: (groupid=0, jobs=1): err= 0: pid=83149: Fri Nov 8 07:48:36 2024 00:21:20.165 read: IOPS=286, BW=1147KiB/s 
(1175kB/s)(11.2MiB/10025msec) 00:21:20.165 slat (usec): min=5, max=9038, avg=34.88, stdev=277.14 00:21:20.165 clat (msec): min=13, max=120, avg=55.58, stdev=17.62 00:21:20.165 lat (msec): min=13, max=120, avg=55.61, stdev=17.62 00:21:20.165 clat percentiles (msec): 00:21:20.165 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 39], 00:21:20.165 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 59], 00:21:20.165 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 90], 00:21:20.165 | 99.00th=[ 96], 99.50th=[ 101], 99.90th=[ 108], 99.95th=[ 108], 00:21:20.165 | 99.99th=[ 121] 00:21:20.165 bw ( KiB/s): min= 848, max= 2015, per=4.21%, avg=1145.25, stdev=254.85, samples=20 00:21:20.165 iops : min= 212, max= 503, avg=286.25, stdev=63.59, samples=20 00:21:20.165 lat (msec) : 20=0.24%, 50=36.70%, 100=62.71%, 250=0.35% 00:21:20.165 cpu : usr=40.48%, sys=1.50%, ctx=1349, majf=0, minf=9 00:21:20.165 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.6%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:20.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.165 issued rwts: total=2875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.165 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.165 filename2: (groupid=0, jobs=1): err= 0: pid=83150: Fri Nov 8 07:48:36 2024 00:21:20.165 read: IOPS=283, BW=1136KiB/s (1163kB/s)(11.1MiB/10045msec) 00:21:20.165 slat (usec): min=5, max=8039, avg=23.38, stdev=226.80 00:21:20.165 clat (msec): min=9, max=119, avg=56.22, stdev=19.03 00:21:20.165 lat (msec): min=9, max=119, avg=56.25, stdev=19.03 00:21:20.165 clat percentiles (msec): 00:21:20.165 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 39], 00:21:20.165 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:21:20.165 | 70.00th=[ 62], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:21:20.165 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 116], 99.95th=[ 120], 00:21:20.165 | 99.99th=[ 121] 00:21:20.165 bw ( KiB/s): min= 792, max= 2531, per=4.16%, avg=1133.35, stdev=359.51, samples=20 00:21:20.165 iops : min= 198, max= 632, avg=283.30, stdev=89.72, samples=20 00:21:20.165 lat (msec) : 10=0.56%, 20=2.31%, 50=34.71%, 100=62.10%, 250=0.32% 00:21:20.165 cpu : usr=34.02%, sys=1.35%, ctx=1000, majf=0, minf=9 00:21:20.165 IO depths : 1=0.1%, 2=0.7%, 4=2.4%, 8=80.2%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:20.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.166 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.166 issued rwts: total=2852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.166 filename2: (groupid=0, jobs=1): err= 0: pid=83151: Fri Nov 8 07:48:36 2024 00:21:20.166 read: IOPS=291, BW=1167KiB/s (1195kB/s)(11.4MiB/10013msec) 00:21:20.166 slat (usec): min=2, max=8069, avg=37.98, stdev=386.85 00:21:20.166 clat (msec): min=14, max=102, avg=54.68, stdev=17.41 00:21:20.166 lat (msec): min=14, max=102, avg=54.71, stdev=17.41 00:21:20.166 clat percentiles (msec): 00:21:20.166 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 38], 00:21:20.166 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 60], 00:21:20.166 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 82], 95.00th=[ 88], 00:21:20.166 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 100], 99.95th=[ 100], 00:21:20.166 | 99.99th=[ 104] 00:21:20.166 bw ( KiB/s): min= 848, max= 1744, per=4.27%, avg=1163.20, 
stdev=201.11, samples=20 00:21:20.166 iops : min= 212, max= 436, avg=290.80, stdev=50.28, samples=20 00:21:20.166 lat (msec) : 20=0.58%, 50=44.74%, 100=54.64%, 250=0.03% 00:21:20.166 cpu : usr=34.86%, sys=1.57%, ctx=989, majf=0, minf=9 00:21:20.166 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=82.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:20.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.166 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.166 issued rwts: total=2921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.166 filename2: (groupid=0, jobs=1): err= 0: pid=83152: Fri Nov 8 07:48:36 2024 00:21:20.166 read: IOPS=275, BW=1101KiB/s (1128kB/s)(10.8MiB/10010msec) 00:21:20.166 slat (usec): min=5, max=8050, avg=45.22, stdev=414.01 00:21:20.166 clat (msec): min=9, max=132, avg=57.89, stdev=19.51 00:21:20.166 lat (msec): min=9, max=132, avg=57.93, stdev=19.51 00:21:20.166 clat percentiles (msec): 00:21:20.166 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 40], 00:21:20.166 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 61], 00:21:20.166 | 70.00th=[ 62], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 96], 00:21:20.166 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 132], 00:21:20.166 | 99.99th=[ 133] 00:21:20.166 bw ( KiB/s): min= 640, max= 1664, per=4.04%, avg=1098.40, stdev=231.03, samples=20 00:21:20.166 iops : min= 160, max= 416, avg=274.60, stdev=57.76, samples=20 00:21:20.166 lat (msec) : 10=0.25%, 20=0.58%, 50=36.90%, 100=59.76%, 250=2.50% 00:21:20.166 cpu : usr=35.56%, sys=1.57%, ctx=1045, majf=0, minf=9 00:21:20.166 IO depths : 1=0.1%, 2=1.6%, 4=6.1%, 8=76.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:20.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.166 complete : 0=0.0%, 4=88.9%, 8=9.8%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.166 issued rwts: total=2756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.166 filename2: (groupid=0, jobs=1): err= 0: pid=83153: Fri Nov 8 07:48:36 2024 00:21:20.166 read: IOPS=284, BW=1140KiB/s (1167kB/s)(11.1MiB/10017msec) 00:21:20.166 slat (usec): min=4, max=8033, avg=33.48, stdev=263.73 00:21:20.166 clat (msec): min=18, max=120, avg=55.95, stdev=17.21 00:21:20.166 lat (msec): min=18, max=120, avg=55.99, stdev=17.21 00:21:20.166 clat percentiles (msec): 00:21:20.166 | 1.00th=[ 22], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 40], 00:21:20.166 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 60], 00:21:20.166 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 82], 95.00th=[ 89], 00:21:20.166 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 108], 99.95th=[ 109], 00:21:20.166 | 99.99th=[ 121] 00:21:20.166 bw ( KiB/s): min= 840, max= 1520, per=4.18%, avg=1137.60, stdev=182.73, samples=20 00:21:20.166 iops : min= 210, max= 380, avg=284.40, stdev=45.68, samples=20 00:21:20.166 lat (msec) : 20=0.56%, 50=39.52%, 100=59.81%, 250=0.11% 00:21:20.166 cpu : usr=38.75%, sys=1.56%, ctx=1242, majf=0, minf=9 00:21:20.166 IO depths : 1=0.2%, 2=1.1%, 4=3.7%, 8=79.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:20.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.166 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.166 issued rwts: total=2854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:20.166 00:21:20.166 Run 
status group 0 (all jobs): 00:21:20.166 READ: bw=26.6MiB/s (27.9MB/s), 1080KiB/s-1173KiB/s (1105kB/s-1201kB/s), io=267MiB (280MB), run=10009-10045msec 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 bdev_null0 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 [2024-11-08 07:48:36.541690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.166 bdev_null1 00:21:20.166 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer 
in "${sanitizers[@]}" 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.167 { 00:21:20.167 "params": { 00:21:20.167 "name": "Nvme$subsystem", 00:21:20.167 "trtype": "$TEST_TRANSPORT", 00:21:20.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.167 "adrfam": "ipv4", 00:21:20.167 "trsvcid": "$NVMF_PORT", 00:21:20.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.167 "hdgst": ${hdgst:-false}, 00:21:20.167 "ddgst": ${ddgst:-false} 00:21:20.167 }, 00:21:20.167 "method": "bdev_nvme_attach_controller" 00:21:20.167 } 00:21:20.167 EOF 00:21:20.167 )") 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:20.167 { 00:21:20.167 "params": { 00:21:20.167 "name": "Nvme$subsystem", 00:21:20.167 "trtype": "$TEST_TRANSPORT", 00:21:20.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.167 "adrfam": "ipv4", 00:21:20.167 "trsvcid": "$NVMF_PORT", 00:21:20.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.167 "hdgst": ${hdgst:-false}, 00:21:20.167 "ddgst": ${ddgst:-false} 00:21:20.167 }, 00:21:20.167 "method": "bdev_nvme_attach_controller" 00:21:20.167 } 00:21:20.167 EOF 00:21:20.167 )") 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:20.167 "params": { 00:21:20.167 "name": "Nvme0", 00:21:20.167 "trtype": "tcp", 00:21:20.167 "traddr": "10.0.0.3", 00:21:20.167 "adrfam": "ipv4", 00:21:20.167 "trsvcid": "4420", 00:21:20.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:20.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:20.167 "hdgst": false, 00:21:20.167 "ddgst": false 00:21:20.167 }, 00:21:20.167 "method": "bdev_nvme_attach_controller" 00:21:20.167 },{ 00:21:20.167 "params": { 00:21:20.167 "name": "Nvme1", 00:21:20.167 "trtype": "tcp", 00:21:20.167 "traddr": "10.0.0.3", 00:21:20.167 "adrfam": "ipv4", 00:21:20.167 "trsvcid": "4420", 00:21:20.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.167 "hdgst": false, 00:21:20.167 "ddgst": false 00:21:20.167 }, 00:21:20.167 "method": "bdev_nvme_attach_controller" 00:21:20.167 }' 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:20.167 07:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:20.167 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:20.167 ... 00:21:20.167 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:20.167 ... 
00:21:20.167 fio-3.35 00:21:20.167 Starting 4 threads 00:21:25.443 00:21:25.443 filename0: (groupid=0, jobs=1): err= 0: pid=83300: Fri Nov 8 07:48:42 2024 00:21:25.443 read: IOPS=2458, BW=19.2MiB/s (20.1MB/s)(96.1MiB/5002msec) 00:21:25.443 slat (nsec): min=3203, max=85692, avg=17933.68, stdev=11207.38 00:21:25.443 clat (usec): min=325, max=13137, avg=3190.40, stdev=812.29 00:21:25.443 lat (usec): min=336, max=13147, avg=3208.34, stdev=813.98 00:21:25.443 clat percentiles (usec): 00:21:25.443 | 1.00th=[ 1090], 5.00th=[ 1680], 10.00th=[ 2008], 20.00th=[ 2737], 00:21:25.443 | 30.00th=[ 2933], 40.00th=[ 3130], 50.00th=[ 3326], 60.00th=[ 3458], 00:21:25.443 | 70.00th=[ 3556], 80.00th=[ 3720], 90.00th=[ 3916], 95.00th=[ 4113], 00:21:25.443 | 99.00th=[ 5604], 99.50th=[ 5997], 99.90th=[ 6915], 99.95th=[ 8356], 00:21:25.443 | 99.99th=[13173] 00:21:25.443 bw ( KiB/s): min=17776, max=23472, per=24.03%, avg=19698.56, stdev=1728.25, samples=9 00:21:25.443 iops : min= 2222, max= 2934, avg=2462.22, stdev=216.10, samples=9 00:21:25.443 lat (usec) : 500=0.09%, 750=0.06%, 1000=0.23% 00:21:25.443 lat (msec) : 2=9.30%, 4=82.98%, 10=7.31%, 20=0.04% 00:21:25.443 cpu : usr=94.74%, sys=4.60%, ctx=18, majf=0, minf=0 00:21:25.443 IO depths : 1=1.8%, 2=14.1%, 4=56.2%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:25.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.443 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.443 issued rwts: total=12296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:25.443 filename0: (groupid=0, jobs=1): err= 0: pid=83301: Fri Nov 8 07:48:42 2024 00:21:25.443 read: IOPS=2712, BW=21.2MiB/s (22.2MB/s)(106MiB/5001msec) 00:21:25.443 slat (nsec): min=2804, max=65290, avg=15500.20, stdev=9423.80 00:21:25.443 clat (usec): min=449, max=12815, avg=2903.91, stdev=882.61 00:21:25.443 lat (usec): min=459, max=12828, avg=2919.41, stdev=884.17 00:21:25.443 clat percentiles (usec): 00:21:25.443 | 1.00th=[ 1303], 5.00th=[ 1582], 10.00th=[ 1647], 20.00th=[ 2024], 00:21:25.443 | 30.00th=[ 2671], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3163], 00:21:25.443 | 70.00th=[ 3359], 80.00th=[ 3490], 90.00th=[ 3687], 95.00th=[ 3916], 00:21:25.443 | 99.00th=[ 5800], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 9241], 00:21:25.443 | 99.99th=[12780] 00:21:25.443 bw ( KiB/s): min=15120, max=23503, per=26.20%, avg=21475.44, stdev=2892.78, samples=9 00:21:25.443 iops : min= 1890, max= 2937, avg=2684.33, stdev=361.52, samples=9 00:21:25.443 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.20% 00:21:25.443 lat (msec) : 2=18.72%, 4=76.91%, 10=4.10%, 20=0.04% 00:21:25.443 cpu : usr=94.52%, sys=4.78%, ctx=3, majf=0, minf=1 00:21:25.443 IO depths : 1=1.0%, 2=7.3%, 4=59.8%, 8=31.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:25.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.443 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.443 issued rwts: total=13565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:25.443 filename1: (groupid=0, jobs=1): err= 0: pid=83302: Fri Nov 8 07:48:42 2024 00:21:25.443 read: IOPS=2406, BW=18.8MiB/s (19.7MB/s)(94.1MiB/5003msec) 00:21:25.443 slat (nsec): min=5551, max=83728, avg=18102.92, stdev=11219.07 00:21:25.443 clat (usec): min=1033, max=12951, avg=3258.59, stdev=783.87 00:21:25.443 lat (usec): min=1039, max=12960, avg=3276.70, stdev=785.23 00:21:25.443 
clat percentiles (usec): 00:21:25.443 | 1.00th=[ 1500], 5.00th=[ 1713], 10.00th=[ 2073], 20.00th=[ 2769], 00:21:25.443 | 30.00th=[ 2999], 40.00th=[ 3228], 50.00th=[ 3392], 60.00th=[ 3490], 00:21:25.443 | 70.00th=[ 3589], 80.00th=[ 3752], 90.00th=[ 3982], 95.00th=[ 4228], 00:21:25.443 | 99.00th=[ 5604], 99.50th=[ 5932], 99.90th=[ 6718], 99.95th=[ 8160], 00:21:25.443 | 99.99th=[10814] 00:21:25.443 bw ( KiB/s): min=17376, max=23184, per=23.47%, avg=19233.89, stdev=1835.14, samples=9 00:21:25.443 iops : min= 2172, max= 2898, avg=2404.22, stdev=229.40, samples=9 00:21:25.443 lat (msec) : 2=8.43%, 4=82.63%, 10=8.90%, 20=0.03% 00:21:25.443 cpu : usr=94.74%, sys=4.60%, ctx=15, majf=0, minf=0 00:21:25.443 IO depths : 1=2.2%, 2=15.1%, 4=55.6%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:25.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.443 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.443 issued rwts: total=12040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:25.443 filename1: (groupid=0, jobs=1): err= 0: pid=83303: Fri Nov 8 07:48:42 2024 00:21:25.443 read: IOPS=2670, BW=20.9MiB/s (21.9MB/s)(104MiB/5001msec) 00:21:25.443 slat (nsec): min=4584, max=90762, avg=16096.40, stdev=10084.07 00:21:25.443 clat (usec): min=155, max=12831, avg=2945.92, stdev=875.85 00:21:25.443 lat (usec): min=163, max=12845, avg=2962.01, stdev=877.34 00:21:25.443 clat percentiles (usec): 00:21:25.443 | 1.00th=[ 1352], 5.00th=[ 1614], 10.00th=[ 1680], 20.00th=[ 2057], 00:21:25.443 | 30.00th=[ 2704], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3228], 00:21:25.443 | 70.00th=[ 3392], 80.00th=[ 3523], 90.00th=[ 3720], 95.00th=[ 3949], 00:21:25.443 | 99.00th=[ 5800], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 9241], 00:21:25.443 | 99.99th=[12780] 00:21:25.443 bw ( KiB/s): min=15072, max=23519, per=25.96%, avg=21277.67, stdev=2833.35, samples=9 00:21:25.443 iops : min= 1884, max= 2939, avg=2659.56, stdev=354.06, samples=9 00:21:25.443 lat (usec) : 250=0.01%, 1000=0.04% 00:21:25.443 lat (msec) : 2=17.90%, 4=77.75%, 10=4.27%, 20=0.04% 00:21:25.443 cpu : usr=94.06%, sys=5.22%, ctx=48, majf=0, minf=0 00:21:25.443 IO depths : 1=1.1%, 2=8.9%, 4=59.0%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:25.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.443 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.443 issued rwts: total=13355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:25.443 00:21:25.443 Run status group 0 (all jobs): 00:21:25.443 READ: bw=80.0MiB/s (83.9MB/s), 18.8MiB/s-21.2MiB/s (19.7MB/s-22.2MB/s), io=400MiB (420MB), run=5001-5003msec 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.443 07:48:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.443 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:25.444 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.444 ************************************ 00:21:25.444 END TEST fio_dif_rand_params 00:21:25.444 ************************************ 00:21:25.444 00:21:25.444 real 0m23.826s 00:21:25.444 user 2m5.313s 00:21:25.444 sys 0m7.373s 00:21:25.444 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:25.444 07:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:25.444 07:48:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:25.444 07:48:42 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:25.444 07:48:42 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:25.444 07:48:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:25.444 ************************************ 00:21:25.444 START TEST fio_dif_digest 00:21:25.444 ************************************ 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:25.444 07:48:42 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:25.444 bdev_null0 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:25.444 [2024-11-08 07:48:42.878428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest 
-- target/dif.sh@54 -- # local file 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:25.444 { 00:21:25.444 "params": { 00:21:25.444 "name": "Nvme$subsystem", 00:21:25.444 "trtype": "$TEST_TRANSPORT", 00:21:25.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.444 "adrfam": "ipv4", 00:21:25.444 "trsvcid": "$NVMF_PORT", 00:21:25.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.444 "hdgst": ${hdgst:-false}, 00:21:25.444 "ddgst": ${ddgst:-false} 00:21:25.444 }, 00:21:25.444 "method": "bdev_nvme_attach_controller" 00:21:25.444 } 00:21:25.444 EOF 00:21:25.444 )") 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:25.444 "params": { 00:21:25.444 "name": "Nvme0", 00:21:25.444 "trtype": "tcp", 00:21:25.444 "traddr": "10.0.0.3", 00:21:25.444 "adrfam": "ipv4", 00:21:25.444 "trsvcid": "4420", 00:21:25.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:25.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:25.444 "hdgst": true, 00:21:25.444 "ddgst": true 00:21:25.444 }, 00:21:25.444 "method": "bdev_nvme_attach_controller" 00:21:25.444 }' 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:25.444 07:48:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:25.444 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:25.444 ... 
00:21:25.444 fio-3.35 00:21:25.444 Starting 3 threads 00:21:37.656 00:21:37.656 filename0: (groupid=0, jobs=1): err= 0: pid=83410: Fri Nov 8 07:48:53 2024 00:21:37.656 read: IOPS=303, BW=37.9MiB/s (39.7MB/s)(379MiB/10005msec) 00:21:37.656 slat (nsec): min=3131, max=55257, avg=11413.57, stdev=5614.24 00:21:37.656 clat (usec): min=8815, max=11573, avg=9867.78, stdev=276.10 00:21:37.656 lat (usec): min=8822, max=11586, avg=9879.19, stdev=276.19 00:21:37.656 clat percentiles (usec): 00:21:37.656 | 1.00th=[ 9634], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:21:37.656 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9765], 00:21:37.656 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10290], 95.00th=[10290], 00:21:37.656 | 99.00th=[10552], 99.50th=[10552], 99.90th=[11600], 99.95th=[11600], 00:21:37.656 | 99.99th=[11600] 00:21:37.656 bw ( KiB/s): min=37632, max=39936, per=33.37%, avg=38844.63, stdev=590.23, samples=19 00:21:37.656 iops : min= 294, max= 312, avg=303.47, stdev= 4.61, samples=19 00:21:37.656 lat (msec) : 10=68.22%, 20=31.78% 00:21:37.656 cpu : usr=94.04%, sys=5.51%, ctx=53, majf=0, minf=0 00:21:37.656 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.656 issued rwts: total=3033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:37.656 filename0: (groupid=0, jobs=1): err= 0: pid=83411: Fri Nov 8 07:48:53 2024 00:21:37.656 read: IOPS=303, BW=37.9MiB/s (39.8MB/s)(380MiB/10009msec) 00:21:37.656 slat (nsec): min=5675, max=36773, avg=9634.81, stdev=3635.39 00:21:37.656 clat (usec): min=3742, max=11499, avg=9866.33, stdev=330.65 00:21:37.656 lat (usec): min=3749, max=11514, avg=9875.96, stdev=330.83 00:21:37.656 clat percentiles (usec): 00:21:37.656 | 1.00th=[ 9634], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:21:37.656 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9765], 00:21:37.656 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10290], 95.00th=[10290], 00:21:37.656 | 99.00th=[10552], 99.50th=[10552], 99.90th=[11469], 99.95th=[11469], 00:21:37.656 | 99.99th=[11469] 00:21:37.656 bw ( KiB/s): min=37632, max=39936, per=33.41%, avg=38885.05, stdev=637.98, samples=19 00:21:37.656 iops : min= 294, max= 312, avg=303.79, stdev= 4.98, samples=19 00:21:37.656 lat (msec) : 4=0.10%, 10=68.21%, 20=31.69% 00:21:37.656 cpu : usr=94.04%, sys=5.46%, ctx=12, majf=0, minf=0 00:21:37.656 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.656 issued rwts: total=3036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:37.656 filename0: (groupid=0, jobs=1): err= 0: pid=83412: Fri Nov 8 07:48:53 2024 00:21:37.656 read: IOPS=303, BW=37.9MiB/s (39.7MB/s)(379MiB/10005msec) 00:21:37.656 slat (nsec): min=5592, max=60675, avg=12064.68, stdev=6688.69 00:21:37.656 clat (usec): min=7553, max=12029, avg=9864.93, stdev=284.63 00:21:37.657 lat (usec): min=7560, max=12058, avg=9877.00, stdev=284.99 00:21:37.657 clat percentiles (usec): 00:21:37.657 | 1.00th=[ 9634], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9634], 00:21:37.657 | 30.00th=[ 9634], 40.00th=[ 
9634], 50.00th=[ 9765], 60.00th=[ 9765], 00:21:37.657 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10290], 95.00th=[10290], 00:21:37.657 | 99.00th=[10421], 99.50th=[10552], 99.90th=[11600], 99.95th=[11994], 00:21:37.657 | 99.99th=[11994] 00:21:37.657 bw ( KiB/s): min=36937, max=39936, per=33.37%, avg=38848.47, stdev=680.93, samples=19 00:21:37.657 iops : min= 288, max= 312, avg=303.47, stdev= 5.41, samples=19 00:21:37.657 lat (msec) : 10=68.22%, 20=31.78% 00:21:37.657 cpu : usr=94.99%, sys=4.54%, ctx=21, majf=0, minf=0 00:21:37.657 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:37.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.657 issued rwts: total=3033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.657 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:37.657 00:21:37.657 Run status group 0 (all jobs): 00:21:37.657 READ: bw=114MiB/s (119MB/s), 37.9MiB/s-37.9MiB/s (39.7MB/s-39.8MB/s), io=1138MiB (1193MB), run=10005-10009msec 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:37.657 ************************************ 00:21:37.657 END TEST fio_dif_digest 00:21:37.657 ************************************ 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.657 00:21:37.657 real 0m11.106s 00:21:37.657 user 0m28.981s 00:21:37.657 sys 0m1.925s 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:37.657 07:48:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:37.657 07:48:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:37.657 07:48:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.657 rmmod nvme_tcp 00:21:37.657 rmmod nvme_fabrics 00:21:37.657 rmmod nvme_keyring 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.657 
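For readers who want to reproduce the fio_dif_digest run above outside the harness: the test drives fio through SPDK's bdev fio plugin and attaches the TCP controller with header and data digests enabled ("hdgst"/"ddgst"), as shown in the JSON the test printed. The sketch below is a hypothetical standalone equivalent reconstructed from that trace; the temporary file name and the job file are illustrative, the wrapper layout is the usual SPDK JSON config format, and the parameter values are the ones printed above.

# Minimal sketch, assuming the standard SPDK JSON config wrapper and the plugin path from the trace
cat > /tmp/nvme0_digest.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true, "ddgst": true } } ] } ] }
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0_digest.json /tmp/digest.fio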
07:48:54 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82649 ']' 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82649 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 82649 ']' 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 82649 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82649 00:21:37.657 killing process with pid 82649 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82649' 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@971 -- # kill 82649 00:21:37.657 07:48:54 nvmf_dif -- common/autotest_common.sh@976 -- # wait 82649 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:37.657 07:48:54 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:37.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.657 Waiting for block devices as requested 00:21:37.657 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.657 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:37.657 
07:48:55 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.657 07:48:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:37.657 07:48:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.657 07:48:55 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:37.657 00:21:37.657 real 1m0.530s 00:21:37.657 user 3m48.575s 00:21:37.657 sys 0m20.620s 00:21:37.657 07:48:55 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:37.657 ************************************ 00:21:37.657 END TEST nvmf_dif 00:21:37.657 ************************************ 00:21:37.657 07:48:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:37.657 07:48:55 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:37.657 07:48:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:37.657 07:48:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:37.657 07:48:55 -- common/autotest_common.sh@10 -- # set +x 00:21:37.657 ************************************ 00:21:37.657 START TEST nvmf_abort_qd_sizes 00:21:37.657 ************************************ 00:21:37.657 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:37.657 * Looking for test storage... 00:21:37.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.917 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:37.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.918 --rc genhtml_branch_coverage=1 00:21:37.918 --rc genhtml_function_coverage=1 00:21:37.918 --rc genhtml_legend=1 00:21:37.918 --rc geninfo_all_blocks=1 00:21:37.918 --rc geninfo_unexecuted_blocks=1 00:21:37.918 00:21:37.918 ' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:37.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.918 --rc genhtml_branch_coverage=1 00:21:37.918 --rc genhtml_function_coverage=1 00:21:37.918 --rc genhtml_legend=1 00:21:37.918 --rc geninfo_all_blocks=1 00:21:37.918 --rc geninfo_unexecuted_blocks=1 00:21:37.918 00:21:37.918 ' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:37.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.918 --rc genhtml_branch_coverage=1 00:21:37.918 --rc genhtml_function_coverage=1 00:21:37.918 --rc genhtml_legend=1 00:21:37.918 --rc geninfo_all_blocks=1 00:21:37.918 --rc geninfo_unexecuted_blocks=1 00:21:37.918 00:21:37.918 ' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:37.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.918 --rc genhtml_branch_coverage=1 00:21:37.918 --rc genhtml_function_coverage=1 00:21:37.918 --rc genhtml_legend=1 00:21:37.918 --rc geninfo_all_blocks=1 00:21:37.918 --rc geninfo_unexecuted_blocks=1 00:21:37.918 00:21:37.918 ' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.918 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:37.918 Cannot find device "nvmf_init_br" 00:21:37.918 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:37.919 Cannot find device "nvmf_init_br2" 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:37.919 Cannot find device "nvmf_tgt_br" 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:37.919 Cannot find device "nvmf_tgt_br2" 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:37.919 Cannot find device "nvmf_init_br" 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:37.919 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:38.178 Cannot find device "nvmf_init_br2" 00:21:38.178 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:38.178 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:38.178 Cannot find device "nvmf_tgt_br" 00:21:38.178 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:38.179 Cannot find device "nvmf_tgt_br2" 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:38.179 Cannot find device "nvmf_br" 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:38.179 Cannot find device "nvmf_init_if" 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:38.179 Cannot find device "nvmf_init_if2" 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:38.179 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:38.179 07:48:55 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:38.179 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:38.179 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:38.179 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:38.179 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:38.179 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:38.179 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:38.179 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:38.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:38.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:21:38.439 00:21:38.439 --- 10.0.0.3 ping statistics --- 00:21:38.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.439 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:38.439 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:38.439 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:21:38.439 00:21:38.439 --- 10.0.0.4 ping statistics --- 00:21:38.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.439 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:38.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:38.439 00:21:38.439 --- 10.0.0.1 ping statistics --- 00:21:38.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.439 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:38.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:38.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:21:38.439 00:21:38.439 --- 10.0.0.2 ping statistics --- 00:21:38.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.439 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:38.439 07:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:39.377 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:39.378 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:39.378 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:39.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84074 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84074 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 84074 ']' 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:39.637 07:48:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:39.637 [2024-11-08 07:48:57.494694] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:21:39.637 [2024-11-08 07:48:57.495049] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.897 [2024-11-08 07:48:57.653215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.897 [2024-11-08 07:48:57.728482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.897 [2024-11-08 07:48:57.728853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.897 [2024-11-08 07:48:57.728880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.897 [2024-11-08 07:48:57.728895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.897 [2024-11-08 07:48:57.728908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.897 [2024-11-08 07:48:57.730426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.897 [2024-11-08 07:48:57.730631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.897 [2024-11-08 07:48:57.730730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.897 [2024-11-08 07:48:57.730732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.897 [2024-11-08 07:48:57.801918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:40.835 07:48:58 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:40.835 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
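The spdk_target_abort test that the trace enters next is, stripped of the harness plumbing, a short RPC sequence followed by the abort example. The sketch below condenses the commands that appear in the trace that follows; it assumes the usual scripts/rpc.py front end (the harness issues the same RPCs through its rpc_cmd wrapper) and uses the PCI address, NQN, and listener address printed below.

# Hypothetical condensed form of the spdk_target_abort setup and the first (-q 4) abort run
scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
build/examples/abort -q 4 -w rw -M 50 -o 4096 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'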
00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:40.836 07:48:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 ************************************ 00:21:40.836 START TEST spdk_target_abort 00:21:40.836 ************************************ 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 spdk_targetn1 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 [2024-11-08 07:48:58.684294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 [2024-11-08 07:48:58.725727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:40.836 07:48:58 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:40.836 07:48:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:44.124 Initializing NVMe Controllers 00:21:44.124 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:44.124 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:44.124 Initialization complete. Launching workers. 
00:21:44.124 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12074, failed: 0 00:21:44.124 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1026, failed to submit 11048 00:21:44.124 success 835, unsuccessful 191, failed 0 00:21:44.124 07:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:44.124 07:49:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:47.413 Initializing NVMe Controllers 00:21:47.413 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:47.413 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:47.413 Initialization complete. Launching workers. 00:21:47.413 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8952, failed: 0 00:21:47.413 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1154, failed to submit 7798 00:21:47.413 success 363, unsuccessful 791, failed 0 00:21:47.672 07:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:47.672 07:49:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:50.961 Initializing NVMe Controllers 00:21:50.961 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:50.961 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:50.961 Initialization complete. Launching workers. 
00:21:50.961 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34811, failed: 0 00:21:50.961 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2338, failed to submit 32473 00:21:50.961 success 572, unsuccessful 1766, failed 0 00:21:50.961 07:49:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:50.961 07:49:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.961 07:49:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:50.961 07:49:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.961 07:49:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:50.961 07:49:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.961 07:49:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84074 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 84074 ']' 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 84074 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84074 00:21:51.529 killing process with pid 84074 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84074' 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 84074 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 84074 00:21:51.529 00:21:51.529 real 0m10.864s 00:21:51.529 user 0m44.190s 00:21:51.529 sys 0m2.516s 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:51.529 07:49:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:51.529 ************************************ 00:21:51.529 END TEST spdk_target_abort 00:21:51.529 ************************************ 00:21:51.788 07:49:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:51.788 07:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:51.788 07:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:51.788 07:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:51.788 ************************************ 00:21:51.788 START TEST kernel_target_abort 00:21:51.788 
************************************ 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:51.788 07:49:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:52.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:52.306 Waiting for block devices as requested 00:21:52.306 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:52.306 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:52.566 No valid GPT data, bailing 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:52.566 No valid GPT data, bailing 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:52.566 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:52.828 No valid GPT data, bailing 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:52.828 No valid GPT data, bailing 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:52.828 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf --hostid=b4f53fcb-853f-493d-bd98-9a37948dacaf -a 10.0.0.1 -t tcp -s 4420 00:21:52.828 00:21:52.828 Discovery Log Number of Records 2, Generation counter 2 00:21:52.828 =====Discovery Log Entry 0====== 00:21:52.828 trtype: tcp 00:21:52.829 adrfam: ipv4 00:21:52.829 subtype: current discovery subsystem 00:21:52.829 treq: not specified, sq flow control disable supported 00:21:52.829 portid: 1 00:21:52.829 trsvcid: 4420 00:21:52.829 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:52.829 traddr: 10.0.0.1 00:21:52.829 eflags: none 00:21:52.829 sectype: none 00:21:52.829 =====Discovery Log Entry 1====== 00:21:52.829 trtype: tcp 00:21:52.829 adrfam: ipv4 00:21:52.829 subtype: nvme subsystem 00:21:52.829 treq: not specified, sq flow control disable supported 00:21:52.829 portid: 1 00:21:52.829 trsvcid: 4420 00:21:52.829 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:52.829 traddr: 10.0.0.1 00:21:52.829 eflags: none 00:21:52.829 sectype: none 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:52.829 07:49:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:52.829 07:49:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:56.154 Initializing NVMe Controllers 00:21:56.154 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:56.154 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:56.154 Initialization complete. Launching workers. 00:21:56.154 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39855, failed: 0 00:21:56.154 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39855, failed to submit 0 00:21:56.154 success 0, unsuccessful 39855, failed 0 00:21:56.154 07:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:56.154 07:49:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:59.446 Initializing NVMe Controllers 00:21:59.446 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:59.446 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:59.446 Initialization complete. Launching workers. 
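These abort sweeps run against a kernel NVMe/TCP target that the trace stood up entirely through nvmet configfs (the mkdir/echo/ln sequence a few entries above) and then verified with nvme discover. A condensed sketch of that setup, using the device and NQN from the trace; the traced helper also writes a serial-style identifier that is omitted here:

    # Export /dev/nvme1n1 as nqn.2016-06.io.spdk:testnqn over TCP port 4420.
    modprobe nvmet
    modprobe nvmet_tcp
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    # The port is live once the symlink exists; discovery should now list testnqn.
    nvme discover -t tcp -a 10.0.0.1 -s 4420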
00:21:59.446 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75312, failed: 0 00:21:59.446 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32765, failed to submit 42547 00:21:59.446 success 0, unsuccessful 32765, failed 0 00:21:59.446 07:49:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:59.446 07:49:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:02.737 Initializing NVMe Controllers 00:22:02.737 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:02.737 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:02.737 Initialization complete. Launching workers. 00:22:02.737 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89881, failed: 0 00:22:02.737 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22478, failed to submit 67403 00:22:02.737 success 0, unsuccessful 22478, failed 0 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:02.737 07:49:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:03.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:05.845 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:05.845 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:05.845 00:22:05.845 real 0m13.929s 00:22:05.845 user 0m6.339s 00:22:05.845 sys 0m5.045s 00:22:05.845 07:49:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:05.845 07:49:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:05.845 ************************************ 00:22:05.845 END TEST kernel_target_abort 00:22:05.845 ************************************ 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:05.845 
07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.845 rmmod nvme_tcp 00:22:05.845 rmmod nvme_fabrics 00:22:05.845 rmmod nvme_keyring 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84074 ']' 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84074 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 84074 ']' 00:22:05.845 Process with pid 84074 is not found 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 84074 00:22:05.845 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (84074) - No such process 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 84074 is not found' 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:05.845 07:49:23 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:06.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:06.414 Waiting for block devices as requested 00:22:06.414 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:06.414 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:06.674 07:49:24 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:06.674 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:22:06.934 00:22:06.934 real 0m29.223s 00:22:06.934 user 0m52.014s 00:22:06.934 sys 0m9.603s 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:06.934 07:49:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:06.934 ************************************ 00:22:06.934 END TEST nvmf_abort_qd_sizes 00:22:06.934 ************************************ 00:22:06.934 07:49:24 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:06.934 07:49:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:06.934 07:49:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:06.934 07:49:24 -- common/autotest_common.sh@10 -- # set +x 00:22:06.934 ************************************ 00:22:06.934 START TEST keyring_file 00:22:06.934 ************************************ 00:22:06.934 07:49:24 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:06.934 * Looking for test storage... 
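The teardown traced above is deliberately surgical: it restores iptables minus only the rules carrying the SPDK_NVMF marker, then dismantles the veth/bridge topology and the target network namespace. A minimal sketch of that unwind, with interface and namespace names taken from the trace; the traced helpers add error handling that is elided here:

    # Remove only the firewall rules this test added; everything else survives the round-trip.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach the veth ends from the bridge, bring them down, then delete the topology.
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
        ip link set "$ifc" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # the traced helper wraps this step inside remove_spdk_ns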
00:22:06.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:06.934 07:49:24 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:06.934 07:49:24 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:22:07.196 07:49:24 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:07.196 07:49:24 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@345 -- # : 1 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@353 -- # local d=1 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@355 -- # echo 1 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.196 07:49:24 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@353 -- # local d=2 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@355 -- # echo 2 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@368 -- # return 0 00:22:07.196 07:49:25 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.196 07:49:25 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:07.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.196 --rc genhtml_branch_coverage=1 00:22:07.196 --rc genhtml_function_coverage=1 00:22:07.196 --rc genhtml_legend=1 00:22:07.196 --rc geninfo_all_blocks=1 00:22:07.196 --rc geninfo_unexecuted_blocks=1 00:22:07.196 00:22:07.196 ' 00:22:07.196 07:49:25 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:07.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.196 --rc genhtml_branch_coverage=1 00:22:07.196 --rc genhtml_function_coverage=1 00:22:07.196 --rc genhtml_legend=1 00:22:07.196 --rc geninfo_all_blocks=1 00:22:07.196 --rc 
geninfo_unexecuted_blocks=1 00:22:07.196 00:22:07.196 ' 00:22:07.196 07:49:25 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:07.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.196 --rc genhtml_branch_coverage=1 00:22:07.196 --rc genhtml_function_coverage=1 00:22:07.196 --rc genhtml_legend=1 00:22:07.196 --rc geninfo_all_blocks=1 00:22:07.196 --rc geninfo_unexecuted_blocks=1 00:22:07.196 00:22:07.196 ' 00:22:07.196 07:49:25 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:07.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.196 --rc genhtml_branch_coverage=1 00:22:07.196 --rc genhtml_function_coverage=1 00:22:07.196 --rc genhtml_legend=1 00:22:07.196 --rc geninfo_all_blocks=1 00:22:07.196 --rc geninfo_unexecuted_blocks=1 00:22:07.196 00:22:07.196 ' 00:22:07.196 07:49:25 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:07.196 07:49:25 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.196 07:49:25 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.196 07:49:25 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.196 07:49:25 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.196 07:49:25 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.196 07:49:25 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:07.196 07:49:25 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@51 -- # : 0 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.196 07:49:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:07.196 07:49:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:07.196 07:49:25 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:07.196 07:49:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:07.196 07:49:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:07.196 07:49:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:07.196 07:49:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:07.196 07:49:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:07.196 07:49:25 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:07.196 07:49:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:07.196 07:49:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:07.196 07:49:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:07.196 07:49:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.w6sUrna5s4 00:22:07.196 07:49:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:07.196 07:49:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:07.197 07:49:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:07.197 07:49:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.w6sUrna5s4 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.w6sUrna5s4 00:22:07.197 07:49:25 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.w6sUrna5s4 00:22:07.197 07:49:25 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6pEiJ3x43d 00:22:07.197 07:49:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:07.197 07:49:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:07.197 07:49:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:07.197 07:49:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:07.197 07:49:25 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:07.197 07:49:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:07.197 07:49:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:07.456 07:49:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6pEiJ3x43d 00:22:07.457 07:49:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6pEiJ3x43d 00:22:07.457 07:49:25 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6pEiJ3x43d 00:22:07.457 07:49:25 keyring_file -- keyring/file.sh@30 -- # tgtpid=84993 00:22:07.457 07:49:25 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84993 00:22:07.457 07:49:25 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:07.457 07:49:25 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 84993 ']' 00:22:07.457 07:49:25 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.457 07:49:25 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:07.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
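prep_key above turns a raw hex key into the NVMe TLS PSK interchange form with an inline Python snippet and locks the file down to mode 0600 before handing it to the keyring. A standalone sketch of that transformation, assuming (as the traced helper appears to) that the interchange string is the NVMeTLSkey-1 prefix, a two-digit hash indicator, and base64 of the key bytes followed by their little-endian CRC32:

    # args: <hex key> <hash indicator>; prints NVMeTLSkey-1:NN:<base64>:
    format_interchange_psk() {
        python3 -c 'import base64, sys, zlib; k = bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()), end="")' "$1" "$2"
    }

    key0path=$(mktemp)
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    chmod 0600 "$key0path"   # keyring_file_add_key rejects group- or world-accessible key files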
00:22:07.457 07:49:25 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.457 07:49:25 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:07.457 07:49:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:07.457 [2024-11-08 07:49:25.257711] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:22:07.457 [2024-11-08 07:49:25.257829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84993 ] 00:22:07.457 [2024-11-08 07:49:25.414239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.716 [2024-11-08 07:49:25.485171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.716 [2024-11-08 07:49:25.579429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:22:08.655 07:49:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:08.655 [2024-11-08 07:49:26.252351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.655 null0 00:22:08.655 [2024-11-08 07:49:26.284326] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:08.655 [2024-11-08 07:49:26.284477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.655 07:49:26 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:08.655 [2024-11-08 07:49:26.316321] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:08.655 request: 00:22:08.655 { 00:22:08.655 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.655 "secure_channel": false, 00:22:08.655 "listen_address": { 00:22:08.655 "trtype": "tcp", 00:22:08.655 "traddr": "127.0.0.1", 00:22:08.655 "trsvcid": "4420" 00:22:08.655 }, 00:22:08.655 "method": "nvmf_subsystem_add_listener", 
00:22:08.655 "req_id": 1 00:22:08.655 } 00:22:08.655 Got JSON-RPC error response 00:22:08.655 response: 00:22:08.655 { 00:22:08.655 "code": -32602, 00:22:08.655 "message": "Invalid parameters" 00:22:08.655 } 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.655 07:49:26 keyring_file -- keyring/file.sh@47 -- # bperfpid=85010 00:22:08.655 07:49:26 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85010 /var/tmp/bperf.sock 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85010 ']' 00:22:08.655 07:49:26 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:08.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:08.655 07:49:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:08.655 [2024-11-08 07:49:26.381472] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
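Adding a second listener on an address that is already claimed is expected to fail with -32602, so the test runs the RPC under a NOT wrapper and treats a non-zero exit as the passing outcome. A simplified stand-in for that pattern (the traced helper also distinguishes crashes from ordinary failures, which is skipped here):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # the command unexpectedly succeeded
        fi
        return 0
    }

    # The target already listens on 127.0.0.1:4420, so this must be rejected.
    NOT scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0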
00:22:08.656 [2024-11-08 07:49:26.381560] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85010 ] 00:22:08.656 [2024-11-08 07:49:26.539543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.656 [2024-11-08 07:49:26.594043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.915 [2024-11-08 07:49:26.643226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:08.915 07:49:26 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:08.915 07:49:26 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:22:08.915 07:49:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.w6sUrna5s4 00:22:08.915 07:49:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.w6sUrna5s4 00:22:09.254 07:49:26 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6pEiJ3x43d 00:22:09.254 07:49:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6pEiJ3x43d 00:22:09.254 07:49:27 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:09.255 07:49:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:09.255 07:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.255 07:49:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:09.255 07:49:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.519 07:49:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.w6sUrna5s4 == \/\t\m\p\/\t\m\p\.\w\6\s\U\r\n\a\5\s\4 ]] 00:22:09.519 07:49:27 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:09.519 07:49:27 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:09.519 07:49:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:09.519 07:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.519 07:49:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:09.778 07:49:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.6pEiJ3x43d == \/\t\m\p\/\t\m\p\.\6\p\E\i\J\3\x\4\3\d ]] 00:22:09.778 07:49:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:09.778 07:49:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:09.778 07:49:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:09.778 07:49:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:09.778 07:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:09.778 07:49:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.038 07:49:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:10.038 07:49:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:10.038 07:49:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:10.038 07:49:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.038 07:49:27 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:10.038 07:49:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.038 07:49:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.298 07:49:28 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:10.298 07:49:28 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.298 07:49:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:10.557 [2024-11-08 07:49:28.299685] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.557 nvme0n1 00:22:10.557 07:49:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:10.557 07:49:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:10.557 07:49:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.557 07:49:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.557 07:49:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.557 07:49:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:10.816 07:49:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:10.816 07:49:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:10.816 07:49:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:10.816 07:49:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:10.816 07:49:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:10.816 07:49:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:10.816 07:49:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:11.074 07:49:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:22:11.074 07:49:28 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:11.332 Running I/O for 1 seconds... 
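Every refcount assertion above is the same two-step: dump the keyring over bdevperf's RPC socket, then filter it with jq. A condensed sketch of that helper pair plus the checks made right after the controller attaches with --psk key0 (paths as in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperfsock=/var/tmp/bperf.sock

    get_key()    { "$rpc" -s "$bperfsock" keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }

    # key0 picks up a second reference from the attached controller; key1 stays at one.
    (( $(get_refcnt key0) == 2 ))
    (( $(get_refcnt key1) == 1 ))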
00:22:12.267 17152.00 IOPS, 67.00 MiB/s 00:22:12.267 Latency(us) 00:22:12.267 [2024-11-08T07:49:30.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.268 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:12.268 nvme0n1 : 1.00 17199.13 67.18 0.00 0.00 7428.41 3011.54 10860.25 00:22:12.268 [2024-11-08T07:49:30.229Z] =================================================================================================================== 00:22:12.268 [2024-11-08T07:49:30.229Z] Total : 17199.13 67.18 0.00 0.00 7428.41 3011.54 10860.25 00:22:12.268 { 00:22:12.268 "results": [ 00:22:12.268 { 00:22:12.268 "job": "nvme0n1", 00:22:12.268 "core_mask": "0x2", 00:22:12.268 "workload": "randrw", 00:22:12.268 "percentage": 50, 00:22:12.268 "status": "finished", 00:22:12.268 "queue_depth": 128, 00:22:12.268 "io_size": 4096, 00:22:12.268 "runtime": 1.00476, 00:22:12.268 "iops": 17199.132131056173, 00:22:12.268 "mibps": 67.18410988693817, 00:22:12.268 "io_failed": 0, 00:22:12.268 "io_timeout": 0, 00:22:12.268 "avg_latency_us": 7428.405219495125, 00:22:12.268 "min_latency_us": 3011.535238095238, 00:22:12.268 "max_latency_us": 10860.251428571428 00:22:12.268 } 00:22:12.268 ], 00:22:12.268 "core_count": 1 00:22:12.268 } 00:22:12.268 07:49:30 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:12.268 07:49:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:12.527 07:49:30 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:22:12.527 07:49:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:12.527 07:49:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:12.527 07:49:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:12.527 07:49:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:12.527 07:49:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:12.786 07:49:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:12.786 07:49:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:22:12.786 07:49:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:12.786 07:49:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:12.786 07:49:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:12.786 07:49:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:12.786 07:49:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.045 07:49:30 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:22:13.045 07:49:30 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:13.045 07:49:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:13.045 07:49:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:13.045 07:49:30 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:13.045 07:49:30 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.045 07:49:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:13.045 07:49:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.045 07:49:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:13.045 07:49:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:13.304 [2024-11-08 07:49:31.167561] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:13.304 [2024-11-08 07:49:31.167848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa770 (107): Transport endpoint is not connected 00:22:13.304 [2024-11-08 07:49:31.168838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fa770 (9): Bad file descriptor 00:22:13.304 [2024-11-08 07:49:31.169837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:13.304 [2024-11-08 07:49:31.169853] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:13.304 [2024-11-08 07:49:31.169862] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:13.304 [2024-11-08 07:49:31.169873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:13.304 request: 00:22:13.304 { 00:22:13.304 "name": "nvme0", 00:22:13.304 "trtype": "tcp", 00:22:13.304 "traddr": "127.0.0.1", 00:22:13.304 "adrfam": "ipv4", 00:22:13.304 "trsvcid": "4420", 00:22:13.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.304 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:13.304 "prchk_reftag": false, 00:22:13.304 "prchk_guard": false, 00:22:13.304 "hdgst": false, 00:22:13.304 "ddgst": false, 00:22:13.304 "psk": "key1", 00:22:13.304 "allow_unrecognized_csi": false, 00:22:13.304 "method": "bdev_nvme_attach_controller", 00:22:13.304 "req_id": 1 00:22:13.304 } 00:22:13.304 Got JSON-RPC error response 00:22:13.304 response: 00:22:13.304 { 00:22:13.304 "code": -5, 00:22:13.304 "message": "Input/output error" 00:22:13.304 } 00:22:13.304 07:49:31 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:13.304 07:49:31 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.304 07:49:31 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.304 07:49:31 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.304 07:49:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:22:13.304 07:49:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:13.304 07:49:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:13.304 07:49:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:13.304 07:49:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.304 07:49:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:13.562 07:49:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:13.562 07:49:31 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:22:13.562 07:49:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:13.562 07:49:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:13.562 07:49:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:13.562 07:49:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:13.562 07:49:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:13.820 07:49:31 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:22:13.820 07:49:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:22:13.820 07:49:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:13.820 07:49:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:22:13.820 07:49:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:14.078 07:49:31 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:22:14.078 07:49:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.078 07:49:31 keyring_file -- keyring/file.sh@78 -- # jq length 00:22:14.337 07:49:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:22:14.337 07:49:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.w6sUrna5s4 00:22:14.337 07:49:32 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.w6sUrna5s4 00:22:14.337 07:49:32 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:22:14.337 07:49:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.w6sUrna5s4 00:22:14.337 07:49:32 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:14.337 07:49:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.338 07:49:32 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:14.338 07:49:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.338 07:49:32 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.w6sUrna5s4 00:22:14.338 07:49:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.w6sUrna5s4 00:22:14.597 [2024-11-08 07:49:32.393256] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.w6sUrna5s4': 0100660 00:22:14.597 [2024-11-08 07:49:32.393291] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:14.597 request: 00:22:14.597 { 00:22:14.597 "name": "key0", 00:22:14.597 "path": "/tmp/tmp.w6sUrna5s4", 00:22:14.597 "method": "keyring_file_add_key", 00:22:14.597 "req_id": 1 00:22:14.597 } 00:22:14.597 Got JSON-RPC error response 00:22:14.597 response: 00:22:14.597 { 00:22:14.597 "code": -1, 00:22:14.597 "message": "Operation not permitted" 00:22:14.597 } 00:22:14.597 07:49:32 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:14.597 07:49:32 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.597 07:49:32 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.597 07:49:32 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.597 07:49:32 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.w6sUrna5s4 00:22:14.597 07:49:32 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.w6sUrna5s4 00:22:14.597 07:49:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.w6sUrna5s4 00:22:14.855 07:49:32 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.w6sUrna5s4 00:22:14.855 07:49:32 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:22:14.855 07:49:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:14.855 07:49:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:14.855 07:49:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:14.855 07:49:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:14.856 07:49:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:14.856 07:49:32 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:22:14.856 07:49:32 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:14.856 07:49:32 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:14.856 07:49:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:14.856 07:49:32 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:14.856 07:49:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.856 07:49:32 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:14.856 07:49:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:14.856 07:49:32 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:14.856 07:49:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:15.114 [2024-11-08 07:49:33.065400] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.w6sUrna5s4': No such file or directory 00:22:15.114 [2024-11-08 07:49:33.065434] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:15.114 [2024-11-08 07:49:33.065467] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:15.114 [2024-11-08 07:49:33.065476] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:22:15.114 [2024-11-08 07:49:33.065485] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:15.114 [2024-11-08 07:49:33.065494] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:15.114 request: 00:22:15.114 { 00:22:15.114 "name": "nvme0", 00:22:15.114 "trtype": "tcp", 00:22:15.114 "traddr": "127.0.0.1", 00:22:15.114 "adrfam": "ipv4", 00:22:15.114 "trsvcid": "4420", 00:22:15.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:15.114 "prchk_reftag": false, 00:22:15.114 "prchk_guard": false, 00:22:15.114 "hdgst": false, 00:22:15.114 "ddgst": false, 00:22:15.114 "psk": "key0", 00:22:15.114 "allow_unrecognized_csi": false, 00:22:15.114 "method": "bdev_nvme_attach_controller", 00:22:15.114 "req_id": 1 00:22:15.114 } 00:22:15.114 Got JSON-RPC error response 00:22:15.114 response: 00:22:15.114 { 00:22:15.114 "code": -19, 00:22:15.114 "message": "No such device" 00:22:15.114 } 00:22:15.372 07:49:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:15.372 07:49:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:15.372 07:49:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:15.372 07:49:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:15.372 07:49:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:22:15.372 07:49:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:15.640 07:49:33 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:15.640 
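
The keyring_file steps above are a negative test of SPDK's key-file permission check: keyring_file_add_key is rejected while the key file is group-accessible (mode 0660, logged as 0100660), is accepted once the file is tightened to 0600, and a later bdev_nvme_attach_controller comes back with -19 (No such device) after the file is removed even though the key name is still registered. A condensed sketch of that flow outside the harness, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock, would be:

  # minimal sketch of keyring/file.sh@84-91; paths and key value taken from this log
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  key=$(mktemp)
  echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key"
  chmod 0660 "$key"
  rpc keyring_file_add_key key0 "$key" && echo "unexpected: add should be rejected"  # -1 Operation not permitted
  chmod 0600 "$key"
  rpc keyring_file_add_key key0 "$key"      # accepted with owner-only permissions
  rm -f "$key"
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 \
      || echo "attach fails once the backing file is gone"
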
07:49:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.V3Ym0cbEiz 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:15.640 07:49:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:15.640 07:49:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:15.640 07:49:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:15.640 07:49:33 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:15.640 07:49:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:15.640 07:49:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.V3Ym0cbEiz 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.V3Ym0cbEiz 00:22:15.640 07:49:33 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.V3Ym0cbEiz 00:22:15.640 07:49:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V3Ym0cbEiz 00:22:15.640 07:49:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V3Ym0cbEiz 00:22:15.901 07:49:33 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:15.901 07:49:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:16.159 nvme0n1 00:22:16.159 07:49:33 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:22:16.159 07:49:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:16.159 07:49:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:16.159 07:49:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:16.159 07:49:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:16.159 07:49:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:16.418 07:49:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:22:16.418 07:49:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:22:16.418 07:49:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:16.418 07:49:34 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:22:16.418 07:49:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:22:16.418 07:49:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:16.418 07:49:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:16.418 07:49:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:16.676 07:49:34 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:22:16.676 07:49:34 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:22:16.676 07:49:34 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:22:16.676 07:49:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:16.676 07:49:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:16.676 07:49:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:16.676 07:49:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:16.935 07:49:34 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:22:16.935 07:49:34 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:16.935 07:49:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:17.192 07:49:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:22:17.192 07:49:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:17.192 07:49:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:22:17.449 07:49:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:22:17.449 07:49:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V3Ym0cbEiz 00:22:17.449 07:49:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V3Ym0cbEiz 00:22:17.707 07:49:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6pEiJ3x43d 00:22:17.707 07:49:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6pEiJ3x43d 00:22:17.965 07:49:35 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:17.965 07:49:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:18.224 nvme0n1 00:22:18.224 07:49:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:22:18.224 07:49:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:18.484 07:49:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:22:18.484 "subsystems": [ 00:22:18.484 { 00:22:18.484 "subsystem": "keyring", 00:22:18.484 "config": [ 00:22:18.484 { 00:22:18.484 "method": "keyring_file_add_key", 00:22:18.484 "params": { 00:22:18.484 "name": "key0", 00:22:18.484 "path": "/tmp/tmp.V3Ym0cbEiz" 00:22:18.484 } 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "method": "keyring_file_add_key", 00:22:18.484 "params": { 00:22:18.484 "name": "key1", 00:22:18.484 "path": "/tmp/tmp.6pEiJ3x43d" 00:22:18.484 } 00:22:18.484 } 00:22:18.484 ] 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "subsystem": "iobuf", 00:22:18.484 "config": [ 00:22:18.484 { 00:22:18.484 "method": "iobuf_set_options", 00:22:18.484 "params": { 00:22:18.484 "small_pool_count": 8192, 00:22:18.484 "large_pool_count": 1024, 00:22:18.484 "small_bufsize": 8192, 00:22:18.484 "large_bufsize": 135168, 00:22:18.484 "enable_numa": false 00:22:18.484 } 00:22:18.484 } 00:22:18.484 ] 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "subsystem": 
"sock", 00:22:18.484 "config": [ 00:22:18.484 { 00:22:18.484 "method": "sock_set_default_impl", 00:22:18.484 "params": { 00:22:18.484 "impl_name": "uring" 00:22:18.484 } 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "method": "sock_impl_set_options", 00:22:18.484 "params": { 00:22:18.484 "impl_name": "ssl", 00:22:18.484 "recv_buf_size": 4096, 00:22:18.484 "send_buf_size": 4096, 00:22:18.484 "enable_recv_pipe": true, 00:22:18.484 "enable_quickack": false, 00:22:18.484 "enable_placement_id": 0, 00:22:18.484 "enable_zerocopy_send_server": true, 00:22:18.484 "enable_zerocopy_send_client": false, 00:22:18.484 "zerocopy_threshold": 0, 00:22:18.484 "tls_version": 0, 00:22:18.484 "enable_ktls": false 00:22:18.484 } 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "method": "sock_impl_set_options", 00:22:18.484 "params": { 00:22:18.484 "impl_name": "posix", 00:22:18.484 "recv_buf_size": 2097152, 00:22:18.484 "send_buf_size": 2097152, 00:22:18.484 "enable_recv_pipe": true, 00:22:18.484 "enable_quickack": false, 00:22:18.484 "enable_placement_id": 0, 00:22:18.484 "enable_zerocopy_send_server": true, 00:22:18.484 "enable_zerocopy_send_client": false, 00:22:18.484 "zerocopy_threshold": 0, 00:22:18.484 "tls_version": 0, 00:22:18.484 "enable_ktls": false 00:22:18.484 } 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "method": "sock_impl_set_options", 00:22:18.484 "params": { 00:22:18.484 "impl_name": "uring", 00:22:18.484 "recv_buf_size": 2097152, 00:22:18.484 "send_buf_size": 2097152, 00:22:18.484 "enable_recv_pipe": true, 00:22:18.484 "enable_quickack": false, 00:22:18.484 "enable_placement_id": 0, 00:22:18.484 "enable_zerocopy_send_server": false, 00:22:18.484 "enable_zerocopy_send_client": false, 00:22:18.484 "zerocopy_threshold": 0, 00:22:18.484 "tls_version": 0, 00:22:18.484 "enable_ktls": false 00:22:18.484 } 00:22:18.484 } 00:22:18.484 ] 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "subsystem": "vmd", 00:22:18.484 "config": [] 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "subsystem": "accel", 00:22:18.484 "config": [ 00:22:18.484 { 00:22:18.484 "method": "accel_set_options", 00:22:18.484 "params": { 00:22:18.484 "small_cache_size": 128, 00:22:18.484 "large_cache_size": 16, 00:22:18.484 "task_count": 2048, 00:22:18.484 "sequence_count": 2048, 00:22:18.484 "buf_count": 2048 00:22:18.484 } 00:22:18.484 } 00:22:18.484 ] 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "subsystem": "bdev", 00:22:18.484 "config": [ 00:22:18.484 { 00:22:18.484 "method": "bdev_set_options", 00:22:18.484 "params": { 00:22:18.484 "bdev_io_pool_size": 65535, 00:22:18.484 "bdev_io_cache_size": 256, 00:22:18.484 "bdev_auto_examine": true, 00:22:18.484 "iobuf_small_cache_size": 128, 00:22:18.484 "iobuf_large_cache_size": 16 00:22:18.484 } 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "method": "bdev_raid_set_options", 00:22:18.484 "params": { 00:22:18.484 "process_window_size_kb": 1024, 00:22:18.484 "process_max_bandwidth_mb_sec": 0 00:22:18.484 } 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "method": "bdev_iscsi_set_options", 00:22:18.484 "params": { 00:22:18.484 "timeout_sec": 30 00:22:18.484 } 00:22:18.484 }, 00:22:18.484 { 00:22:18.484 "method": "bdev_nvme_set_options", 00:22:18.484 "params": { 00:22:18.484 "action_on_timeout": "none", 00:22:18.484 "timeout_us": 0, 00:22:18.484 "timeout_admin_us": 0, 00:22:18.484 "keep_alive_timeout_ms": 10000, 00:22:18.484 "arbitration_burst": 0, 00:22:18.484 "low_priority_weight": 0, 00:22:18.484 "medium_priority_weight": 0, 00:22:18.484 "high_priority_weight": 0, 00:22:18.484 "nvme_adminq_poll_period_us": 
10000, 00:22:18.484 "nvme_ioq_poll_period_us": 0, 00:22:18.484 "io_queue_requests": 512, 00:22:18.484 "delay_cmd_submit": true, 00:22:18.484 "transport_retry_count": 4, 00:22:18.484 "bdev_retry_count": 3, 00:22:18.484 "transport_ack_timeout": 0, 00:22:18.484 "ctrlr_loss_timeout_sec": 0, 00:22:18.485 "reconnect_delay_sec": 0, 00:22:18.485 "fast_io_fail_timeout_sec": 0, 00:22:18.485 "disable_auto_failback": false, 00:22:18.485 "generate_uuids": false, 00:22:18.485 "transport_tos": 0, 00:22:18.485 "nvme_error_stat": false, 00:22:18.485 "rdma_srq_size": 0, 00:22:18.485 "io_path_stat": false, 00:22:18.485 "allow_accel_sequence": false, 00:22:18.485 "rdma_max_cq_size": 0, 00:22:18.485 "rdma_cm_event_timeout_ms": 0, 00:22:18.485 "dhchap_digests": [ 00:22:18.485 "sha256", 00:22:18.485 "sha384", 00:22:18.485 "sha512" 00:22:18.485 ], 00:22:18.485 "dhchap_dhgroups": [ 00:22:18.485 "null", 00:22:18.485 "ffdhe2048", 00:22:18.485 "ffdhe3072", 00:22:18.485 "ffdhe4096", 00:22:18.485 "ffdhe6144", 00:22:18.485 "ffdhe8192" 00:22:18.485 ] 00:22:18.485 } 00:22:18.485 }, 00:22:18.485 { 00:22:18.485 "method": "bdev_nvme_attach_controller", 00:22:18.485 "params": { 00:22:18.485 "name": "nvme0", 00:22:18.485 "trtype": "TCP", 00:22:18.485 "adrfam": "IPv4", 00:22:18.485 "traddr": "127.0.0.1", 00:22:18.485 "trsvcid": "4420", 00:22:18.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:18.485 "prchk_reftag": false, 00:22:18.485 "prchk_guard": false, 00:22:18.485 "ctrlr_loss_timeout_sec": 0, 00:22:18.485 "reconnect_delay_sec": 0, 00:22:18.485 "fast_io_fail_timeout_sec": 0, 00:22:18.485 "psk": "key0", 00:22:18.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:18.485 "hdgst": false, 00:22:18.485 "ddgst": false, 00:22:18.485 "multipath": "multipath" 00:22:18.485 } 00:22:18.485 }, 00:22:18.485 { 00:22:18.485 "method": "bdev_nvme_set_hotplug", 00:22:18.485 "params": { 00:22:18.485 "period_us": 100000, 00:22:18.485 "enable": false 00:22:18.485 } 00:22:18.485 }, 00:22:18.485 { 00:22:18.485 "method": "bdev_wait_for_examine" 00:22:18.485 } 00:22:18.485 ] 00:22:18.485 }, 00:22:18.485 { 00:22:18.485 "subsystem": "nbd", 00:22:18.485 "config": [] 00:22:18.485 } 00:22:18.485 ] 00:22:18.485 }' 00:22:18.485 07:49:36 keyring_file -- keyring/file.sh@115 -- # killprocess 85010 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85010 ']' 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85010 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@957 -- # uname 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85010 00:22:18.485 killing process with pid 85010 00:22:18.485 Received shutdown signal, test time was about 1.000000 seconds 00:22:18.485 00:22:18.485 Latency(us) 00:22:18.485 [2024-11-08T07:49:36.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.485 [2024-11-08T07:49:36.446Z] =================================================================================================================== 00:22:18.485 [2024-11-08T07:49:36.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85010' 00:22:18.485 
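
The JSON above was captured from the first bdevperf (pid 85010) with save_config; after that process is killed, the same blob is handed to a second bdevperf below as its startup configuration, which is why the new command line shows -c /dev/fd/63: the config arrives through bash process substitution rather than a file on disk. A condensed sketch of that handoff, with paths as they appear in this log:

  # keyring/file.sh@113-118 in miniature: snapshot the live config, then
  # boot a fresh bdevperf from it via process substitution
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  config=$(rpc save_config)
  kill "$old_bperf_pid" && wait "$old_bperf_pid"   # old_bperf_pid: the 85010 instance above
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config") &
  new_bperf_pid=$!
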
07:49:36 keyring_file -- common/autotest_common.sh@971 -- # kill 85010 00:22:18.485 07:49:36 keyring_file -- common/autotest_common.sh@976 -- # wait 85010 00:22:18.745 07:49:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=85249 00:22:18.745 07:49:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85249 /var/tmp/bperf.sock 00:22:18.745 07:49:36 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85249 ']' 00:22:18.745 07:49:36 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:18.745 07:49:36 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:18.745 07:49:36 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:18.745 07:49:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:22:18.745 "subsystems": [ 00:22:18.745 { 00:22:18.745 "subsystem": "keyring", 00:22:18.745 "config": [ 00:22:18.745 { 00:22:18.745 "method": "keyring_file_add_key", 00:22:18.745 "params": { 00:22:18.745 "name": "key0", 00:22:18.745 "path": "/tmp/tmp.V3Ym0cbEiz" 00:22:18.745 } 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "method": "keyring_file_add_key", 00:22:18.745 "params": { 00:22:18.745 "name": "key1", 00:22:18.745 "path": "/tmp/tmp.6pEiJ3x43d" 00:22:18.745 } 00:22:18.745 } 00:22:18.745 ] 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "subsystem": "iobuf", 00:22:18.745 "config": [ 00:22:18.745 { 00:22:18.745 "method": "iobuf_set_options", 00:22:18.745 "params": { 00:22:18.745 "small_pool_count": 8192, 00:22:18.745 "large_pool_count": 1024, 00:22:18.745 "small_bufsize": 8192, 00:22:18.745 "large_bufsize": 135168, 00:22:18.745 "enable_numa": false 00:22:18.745 } 00:22:18.745 } 00:22:18.745 ] 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "subsystem": "sock", 00:22:18.745 "config": [ 00:22:18.745 { 00:22:18.745 "method": "sock_set_default_impl", 00:22:18.745 "params": { 00:22:18.745 "impl_name": "uring" 00:22:18.745 } 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "method": "sock_impl_set_options", 00:22:18.745 "params": { 00:22:18.745 "impl_name": "ssl", 00:22:18.745 "recv_buf_size": 4096, 00:22:18.745 "send_buf_size": 4096, 00:22:18.745 "enable_recv_pipe": true, 00:22:18.745 "enable_quickack": false, 00:22:18.745 "enable_placement_id": 0, 00:22:18.745 "enable_zerocopy_send_server": true, 00:22:18.745 "enable_zerocopy_send_client": false, 00:22:18.745 "zerocopy_threshold": 0, 00:22:18.745 "tls_version": 0, 00:22:18.745 "enable_ktls": false 00:22:18.745 } 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "method": "sock_impl_set_options", 00:22:18.745 "params": { 00:22:18.745 "impl_name": "posix", 00:22:18.745 "recv_buf_size": 2097152, 00:22:18.745 "send_buf_size": 2097152, 00:22:18.745 "enable_recv_pipe": true, 00:22:18.745 "enable_quickack": false, 00:22:18.745 "enable_placement_id": 0, 00:22:18.745 "enable_zerocopy_send_server": true, 00:22:18.745 "enable_zerocopy_send_client": false, 00:22:18.745 "zerocopy_threshold": 0, 00:22:18.745 "tls_version": 0, 00:22:18.745 "enable_ktls": false 00:22:18.745 } 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "method": "sock_impl_set_options", 00:22:18.745 "params": { 00:22:18.745 "impl_name": "uring", 00:22:18.745 "recv_buf_size": 2097152, 00:22:18.745 "send_buf_size": 2097152, 00:22:18.745 "enable_recv_pipe": true, 00:22:18.745 "enable_quickack": false, 00:22:18.745 "enable_placement_id": 0, 00:22:18.745 "enable_zerocopy_send_server": false, 00:22:18.745 "enable_zerocopy_send_client": 
false, 00:22:18.745 "zerocopy_threshold": 0, 00:22:18.745 "tls_version": 0, 00:22:18.745 "enable_ktls": false 00:22:18.745 } 00:22:18.745 } 00:22:18.745 ] 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "subsystem": "vmd", 00:22:18.745 "config": [] 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "subsystem": "accel", 00:22:18.745 "config": [ 00:22:18.745 { 00:22:18.745 "method": "accel_set_options", 00:22:18.745 "params": { 00:22:18.745 "small_cache_size": 128, 00:22:18.745 "large_cache_size": 16, 00:22:18.745 "task_count": 2048, 00:22:18.745 "sequence_count": 2048, 00:22:18.745 "buf_count": 2048 00:22:18.745 } 00:22:18.745 } 00:22:18.745 ] 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "subsystem": "bdev", 00:22:18.745 "config": [ 00:22:18.745 { 00:22:18.745 "method": "bdev_set_options", 00:22:18.745 "params": { 00:22:18.745 "bdev_io_pool_size": 65535, 00:22:18.745 "bdev_io_cache_size": 256, 00:22:18.745 "bdev_auto_examine": true, 00:22:18.745 "iobuf_small_cache_size": 128, 00:22:18.745 "iobuf_large_cache_size": 16 00:22:18.745 } 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "method": "bdev_raid_set_options", 00:22:18.745 "params": { 00:22:18.745 "process_window_size_kb": 1024, 00:22:18.745 "process_max_bandwidth_mb_sec": 0 00:22:18.745 } 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "method": "bdev_iscsi_set_options", 00:22:18.745 "params": { 00:22:18.745 "timeout_sec": 30 00:22:18.745 } 00:22:18.745 }, 00:22:18.745 { 00:22:18.745 "method": "bdev_nvme_set_options", 00:22:18.746 "params": { 00:22:18.746 "action_on_timeout": "none", 00:22:18.746 "timeout_us": 0, 00:22:18.746 "timeout_admin_us": 0, 00:22:18.746 "keep_alive_timeout_ms": 10000, 00:22:18.746 "arbitration_burst": 0, 00:22:18.746 "low_priority_weight": 0, 00:22:18.746 "medium_priority_weight": 0, 00:22:18.746 "high_priority_weight": 0, 00:22:18.746 "nvme_adminq_poll_period_us": 10000, 00:22:18.746 "nvme_ioq_poll_period_us": 0, 00:22:18.746 "io_queue_requests": 512, 00:22:18.746 "delay_cmd_submit": true, 00:22:18.746 "transport_retry_count": 4, 00:22:18.746 "bdev_retry_count": 3, 00:22:18.746 "transport_ack_timeout": 0, 00:22:18.746 "ctrlr_loss_timeout_sec": 0, 00:22:18.746 "reconnect_delay_sec": 0, 00:22:18.746 "fast_io_fail_timeout_sec": 0, 00:22:18.746 "disable_auto_failback": false, 00:22:18.746 "generate_uuids": false, 00:22:18.746 "transport_tos": 0, 00:22:18.746 "nvme_error_stat": false, 00:22:18.746 "rdma_srq_size": 0, 00:22:18.746 "io_path_stat": false, 00:22:18.746 "allow_accel_sequence": false, 00:22:18.746 "rdma_max_cq_size": 0, 00:22:18.746 "rdma_cm_event_timeout_ms": 0, 00:22:18.746 "dhchap_digests": [ 00:22:18.746 "sha256", 00:22:18.746 "sha384", 00:22:18.746 "sha512" 00:22:18.746 ], 00:22:18.746 "dhchap_dhgroups": [ 00:22:18.746 "null", 00:22:18.746 "ffdhe2048", 00:22:18.746 "ffdhe3072", 00:22:18.746 "ffdhe4096", 00:22:18.746 "ffdhe6144", 00:22:18.746 "ffdhe8192" 00:22:18.746 ] 00:22:18.746 } 00:22:18.746 }, 00:22:18.746 { 00:22:18.746 "method": "bdev_nvme_attach_controller", 00:22:18.746 "params": { 00:22:18.746 "name": "nvme0", 00:22:18.746 "trtype": "TCP", 00:22:18.746 "adrfam": "IPv4", 00:22:18.746 "traddr": "127.0.0.1", 00:22:18.746 "trsvcid": "4420", 00:22:18.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:18.746 "prchk_reftag": false, 00:22:18.746 "prchk_guard": false, 00:22:18.746 "ctrlr_loss_timeout_sec": 0, 00:22:18.746 "reconnect_delay_sec": 0, 00:22:18.746 "fast_io_fail_timeout_sec": 0, 00:22:18.746 "psk": "key0", 00:22:18.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:18.746 "hdgst": false, 00:22:18.746 
"ddgst": false, 00:22:18.746 "multipath": "multipath" 00:22:18.746 } 00:22:18.746 }, 00:22:18.746 { 00:22:18.746 "method": "bdev_nvme_set_hotplug", 00:22:18.746 "params": { 00:22:18.746 "period_us": 100000, 00:22:18.746 "enable": false 00:22:18.746 } 00:22:18.746 }, 00:22:18.746 { 00:22:18.746 "method": "bdev_wait_for_examine" 00:22:18.746 } 00:22:18.746 ] 00:22:18.746 }, 00:22:18.746 { 00:22:18.746 "subsystem": "nbd", 00:22:18.746 "config": [] 00:22:18.746 } 00:22:18.746 ] 00:22:18.746 }' 00:22:18.746 07:49:36 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:18.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:18.746 07:49:36 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:18.746 07:49:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:18.746 [2024-11-08 07:49:36.700767] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 00:22:19.007 [2024-11-08 07:49:36.701019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85249 ] 00:22:19.007 [2024-11-08 07:49:36.842111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.007 [2024-11-08 07:49:36.906027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.266 [2024-11-08 07:49:37.064435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:19.266 [2024-11-08 07:49:37.134658] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.836 07:49:37 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:19.836 07:49:37 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:22:19.836 07:49:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:22:19.836 07:49:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:19.836 07:49:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:22:20.095 07:49:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:20.095 07:49:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:22:20.095 07:49:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:20.095 07:49:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:20.095 07:49:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:20.095 07:49:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:20.095 07:49:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:20.355 07:49:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:22:20.355 07:49:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:22:20.355 07:49:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:20.355 07:49:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:20.355 07:49:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:20.355 07:49:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:20.355 07:49:38 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:20.355 07:49:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:22:20.355 07:49:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:22:20.355 07:49:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:20.355 07:49:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:22:20.614 07:49:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:22:20.614 07:49:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:20.614 07:49:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.V3Ym0cbEiz /tmp/tmp.6pEiJ3x43d 00:22:20.614 07:49:38 keyring_file -- keyring/file.sh@20 -- # killprocess 85249 00:22:20.614 07:49:38 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85249 ']' 00:22:20.614 07:49:38 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85249 00:22:20.614 07:49:38 keyring_file -- common/autotest_common.sh@957 -- # uname 00:22:20.614 07:49:38 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:20.614 07:49:38 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85249 00:22:20.614 killing process with pid 85249 00:22:20.614 Received shutdown signal, test time was about 1.000000 seconds 00:22:20.614 00:22:20.615 Latency(us) 00:22:20.615 [2024-11-08T07:49:38.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.615 [2024-11-08T07:49:38.576Z] =================================================================================================================== 00:22:20.615 [2024-11-08T07:49:38.576Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:20.615 07:49:38 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:20.615 07:49:38 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:20.615 07:49:38 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85249' 00:22:20.615 07:49:38 keyring_file -- common/autotest_common.sh@971 -- # kill 85249 00:22:20.615 07:49:38 keyring_file -- common/autotest_common.sh@976 -- # wait 85249 00:22:20.874 07:49:38 keyring_file -- keyring/file.sh@21 -- # killprocess 84993 00:22:20.875 07:49:38 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 84993 ']' 00:22:20.875 07:49:38 keyring_file -- common/autotest_common.sh@956 -- # kill -0 84993 00:22:20.875 07:49:38 keyring_file -- common/autotest_common.sh@957 -- # uname 00:22:20.875 07:49:38 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:20.875 07:49:38 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84993 00:22:21.134 killing process with pid 84993 00:22:21.134 07:49:38 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:21.134 07:49:38 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:21.135 07:49:38 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84993' 00:22:21.135 07:49:38 keyring_file -- common/autotest_common.sh@971 -- # kill 84993 00:22:21.135 07:49:38 keyring_file -- common/autotest_common.sh@976 -- # wait 84993 00:22:21.394 00:22:21.394 real 0m14.361s 00:22:21.394 user 0m34.157s 00:22:21.394 sys 0m3.756s 00:22:21.394 07:49:39 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:21.394 
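
keyring_file wraps up above; the keyring_linux test that follows is run through scripts/keyctl-session-wrapper, and the "Joined session keyring" line below is keyctl reporting that the whole run happens inside a fresh session keyring, so the :spdk-test:key* entries added to @s are discarded when the wrapper exits instead of lingering in the caller's session. The wrapper's contents are not shown in this log; a plausible equivalent (an assumption, not the repo's verbatim script) is simply:

  #!/usr/bin/env bash
  # assumed shape of scripts/keyctl-session-wrapper: run the given command
  # inside a new anonymous session keyring so keys added to @s die with it
  exec keyctl session - "$@"
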
************************************ 00:22:21.394 END TEST keyring_file 00:22:21.394 ************************************ 00:22:21.394 07:49:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:21.394 07:49:39 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:22:21.394 07:49:39 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:21.394 07:49:39 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:21.394 07:49:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:21.394 07:49:39 -- common/autotest_common.sh@10 -- # set +x 00:22:21.394 ************************************ 00:22:21.394 START TEST keyring_linux 00:22:21.394 ************************************ 00:22:21.394 07:49:39 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:21.394 Joined session keyring: 718280341 00:22:21.394 * Looking for test storage... 00:22:21.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:21.394 07:49:39 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:21.394 07:49:39 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:21.394 07:49:39 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:22:21.655 07:49:39 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.655 07:49:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:22:21.655 07:49:39 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.655 07:49:39 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.655 --rc genhtml_branch_coverage=1 00:22:21.655 --rc genhtml_function_coverage=1 00:22:21.655 --rc genhtml_legend=1 00:22:21.655 --rc geninfo_all_blocks=1 00:22:21.655 --rc geninfo_unexecuted_blocks=1 00:22:21.655 00:22:21.655 ' 00:22:21.655 07:49:39 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.655 --rc genhtml_branch_coverage=1 00:22:21.655 --rc genhtml_function_coverage=1 00:22:21.655 --rc genhtml_legend=1 00:22:21.655 --rc geninfo_all_blocks=1 00:22:21.655 --rc geninfo_unexecuted_blocks=1 00:22:21.655 00:22:21.655 ' 00:22:21.655 07:49:39 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.655 --rc genhtml_branch_coverage=1 00:22:21.655 --rc genhtml_function_coverage=1 00:22:21.655 --rc genhtml_legend=1 00:22:21.655 --rc geninfo_all_blocks=1 00:22:21.655 --rc geninfo_unexecuted_blocks=1 00:22:21.655 00:22:21.655 ' 00:22:21.655 07:49:39 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:21.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.655 --rc genhtml_branch_coverage=1 00:22:21.655 --rc genhtml_function_coverage=1 00:22:21.655 --rc genhtml_legend=1 00:22:21.655 --rc geninfo_all_blocks=1 00:22:21.655 --rc geninfo_unexecuted_blocks=1 00:22:21.655 00:22:21.655 ' 00:22:21.655 07:49:39 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:21.655 07:49:39 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:21.655 07:49:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:21.655 07:49:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.655 07:49:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.655 07:49:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.655 07:49:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.655 07:49:39 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.655 07:49:39 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.655 07:49:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b4f53fcb-853f-493d-bd98-9a37948dacaf 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=b4f53fcb-853f-493d-bd98-9a37948dacaf 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:21.656 07:49:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.656 07:49:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.656 07:49:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.656 07:49:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.656 07:49:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.656 07:49:39 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.656 07:49:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.656 07:49:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:21.656 07:49:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 
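
The host identity established above comes from nvme-cli: NVME_HOSTNQN takes the nqn.2014-08.org.nvmexpress:uuid:<uuid> form and NVME_HOSTID is the UUID portion of it. A sketch of the same derivation, with a uuidgen fallback for hosts without nvme-cli (the fallback is an assumption, not part of this run):

  # what nvmf/common.sh@17-18 establishes for this run
  NVME_HOSTNQN=$(nvme gen-hostnqn 2>/dev/null) \
      || NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
  NVME_HOSTID=${NVME_HOSTNQN##*:}   # b4f53fcb-853f-493d-bd98-9a37948dacaf in this log
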
00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.656 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:21.656 /tmp/:spdk-test:key0 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:21.656 07:49:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:21.656 /tmp/:spdk-test:key1 00:22:21.656 07:49:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85372 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:21.656 07:49:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85372 00:22:21.656 07:49:39 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85372 ']' 00:22:21.656 07:49:39 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.656 07:49:39 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:21.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.656 07:49:39 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.656 07:49:39 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:21.656 07:49:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:21.915 [2024-11-08 07:49:39.653461] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
00:22:21.915 [2024-11-08 07:49:39.653571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85372 ] 00:22:21.915 [2024-11-08 07:49:39.801302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.915 [2024-11-08 07:49:39.845091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.174 [2024-11-08 07:49:39.900153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:22.174 07:49:40 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:22.174 07:49:40 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:22:22.174 07:49:40 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:22.174 07:49:40 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.174 07:49:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:22.174 [2024-11-08 07:49:40.074451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.174 null0 00:22:22.174 [2024-11-08 07:49:40.106429] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:22.174 [2024-11-08 07:49:40.106590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:22.174 07:49:40 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.174 07:49:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:22.174 419484444 00:22:22.174 07:49:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:22.433 984691114 00:22:22.433 07:49:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85388 00:22:22.433 07:49:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85388 /var/tmp/bperf.sock 00:22:22.433 07:49:40 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85388 ']' 00:22:22.433 07:49:40 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:22.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:22.433 07:49:40 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:22.433 07:49:40 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:22.433 07:49:40 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:22.433 07:49:40 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:22.433 07:49:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:22.433 [2024-11-08 07:49:40.192455] Starting SPDK v25.01-pre git sha1 e729adafb / DPDK 24.03.0 initialization... 
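
The NVMeTLSkey-1:00:...: strings loaded into the kernel keyring above are the PSK interchange form produced by format_interchange_psk: the configured secret (here the literal ASCII string 00112233445566778899aabbccddeeff) with a 4-byte CRC-32 check value appended, base64-encoded, and wrapped in a NVMeTLSkey-1:<hash>: prefix, where the 00 field matches the digest=0 argument passed to prep_key. The structure can be verified straight from this log:

  # decode the key0 interchange string from the keyctl add above; the first
  # 32 bytes are the configured secret, the trailing 4 bytes its CRC-32
  echo "MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ" | base64 -d | head -c 32; echo
  # prints: 00112233445566778899aabbccddeeff
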
00:22:22.433 [2024-11-08 07:49:40.192564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85388 ] 00:22:22.433 [2024-11-08 07:49:40.349460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.692 [2024-11-08 07:49:40.429424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.262 07:49:41 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.262 07:49:41 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:22:23.262 07:49:41 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:23.262 07:49:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:23.521 07:49:41 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:23.521 07:49:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:23.780 [2024-11-08 07:49:41.691572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:24.040 07:49:41 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:24.040 07:49:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:24.040 [2024-11-08 07:49:41.981004] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.299 nvme0n1 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:24.299 07:49:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:24.299 07:49:42 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:24.299 07:49:42 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:24.299 07:49:42 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:24.299 07:49:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:24.559 07:49:42 keyring_linux -- keyring/linux.sh@25 -- # sn=419484444 00:22:24.559 07:49:42 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:24.559 07:49:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
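
The sequence above is the usual pattern for using the kernel keyring from SPDK: bdevperf was started with --wait-for-rpc, keyring_linux_set_options --enable switches the Linux keyring plugin on, framework_start_init completes bring-up, and only then is the controller attached with --psk :spdk-test:key0, a key name the plugin resolves against the session keyring rather than a file path. Reduced to the bare RPC calls seen in this log:

  # enable the Linux keyring backend before framework init completes, then
  # attach using a kernel key name instead of a key file
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc keyring_linux_set_options --enable
  rpc framework_start_init
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
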
00:22:24.559 07:49:42 keyring_linux -- keyring/linux.sh@26 -- # [[ 419484444 == \4\1\9\4\8\4\4\4\4 ]] 00:22:24.559 07:49:42 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 419484444 00:22:24.818 07:49:42 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:24.818 07:49:42 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:24.818 Running I/O for 1 seconds... 00:22:25.757 17971.00 IOPS, 70.20 MiB/s 00:22:25.757 Latency(us) 00:22:25.757 [2024-11-08T07:49:43.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.757 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:25.757 nvme0n1 : 1.01 17969.37 70.19 0.00 0.00 7095.70 6459.98 14792.41 00:22:25.757 [2024-11-08T07:49:43.718Z] =================================================================================================================== 00:22:25.757 [2024-11-08T07:49:43.718Z] Total : 17969.37 70.19 0.00 0.00 7095.70 6459.98 14792.41 00:22:25.757 { 00:22:25.757 "results": [ 00:22:25.757 { 00:22:25.757 "job": "nvme0n1", 00:22:25.757 "core_mask": "0x2", 00:22:25.757 "workload": "randread", 00:22:25.757 "status": "finished", 00:22:25.757 "queue_depth": 128, 00:22:25.757 "io_size": 4096, 00:22:25.757 "runtime": 1.007214, 00:22:25.757 "iops": 17969.3689722343, 00:22:25.757 "mibps": 70.19284754779024, 00:22:25.757 "io_failed": 0, 00:22:25.757 "io_timeout": 0, 00:22:25.757 "avg_latency_us": 7095.703707071425, 00:22:25.757 "min_latency_us": 6459.977142857143, 00:22:25.757 "max_latency_us": 14792.411428571428 00:22:25.757 } 00:22:25.757 ], 00:22:25.757 "core_count": 1 00:22:25.757 } 00:22:25.757 07:49:43 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:25.757 07:49:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:26.016 07:49:43 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:26.016 07:49:43 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:26.016 07:49:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:26.016 07:49:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:26.016 07:49:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:26.016 07:49:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:26.276 07:49:44 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:26.276 07:49:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:26.276 07:49:44 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:26.276 07:49:44 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:26.276 07:49:44 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:22:26.276 07:49:44 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:26.276 
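
The timed run above is driven by bdevperf.py perform_tests against the same RPC socket, and the JSON block printed after the human-readable table is the easier thing to post-process. For example, IOPS and mean latency per job can be pulled out with jq (field names taken from the dump above; results.json is a hypothetical file holding that block):

  # extract the headline numbers from the perform_tests JSON summary
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us mean latency"' results.json
  # -> nvme0n1: 17969.3689722343 IOPS, 7095.703707071425 us mean latency
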
07:49:44 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:26.276 07:49:44 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.276 07:49:44 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:26.276 07:49:44 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.276 07:49:44 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:26.276 07:49:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:26.536 [2024-11-08 07:49:44.430914] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:26.536 [2024-11-08 07:49:44.431574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb925d0 (107): Transport endpoint is not connected 00:22:26.536 [2024-11-08 07:49:44.432560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb925d0 (9): Bad file descriptor 00:22:26.536 [2024-11-08 07:49:44.433559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:26.536 [2024-11-08 07:49:44.433583] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:26.536 [2024-11-08 07:49:44.433593] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:26.536 [2024-11-08 07:49:44.433605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
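The failed attach above is intentional: the second attempt uses :spdk-test:key1, presumably a PSK the target was not configured with, so the connection is expected to be rejected, and the NOT wrapper from autotest_common.sh turns that expected failure into a passing assertion. A rough sketch of what such a wrapper does is below; the real helper is more involved (the es > 128 check in the trace suggests it also distinguishes signal exits), so this is an approximation, not the actual implementation.

# succeed only when the wrapped command fails
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# usage as in the trace: this attach must not succeed
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1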
00:22:26.536 request: 00:22:26.536 { 00:22:26.536 "name": "nvme0", 00:22:26.536 "trtype": "tcp", 00:22:26.536 "traddr": "127.0.0.1", 00:22:26.536 "adrfam": "ipv4", 00:22:26.536 "trsvcid": "4420", 00:22:26.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:26.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:26.536 "prchk_reftag": false, 00:22:26.536 "prchk_guard": false, 00:22:26.536 "hdgst": false, 00:22:26.536 "ddgst": false, 00:22:26.536 "psk": ":spdk-test:key1", 00:22:26.536 "allow_unrecognized_csi": false, 00:22:26.536 "method": "bdev_nvme_attach_controller", 00:22:26.536 "req_id": 1 00:22:26.536 } 00:22:26.536 Got JSON-RPC error response 00:22:26.536 response: 00:22:26.536 { 00:22:26.536 "code": -5, 00:22:26.536 "message": "Input/output error" 00:22:26.536 } 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@33 -- # sn=419484444 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 419484444 00:22:26.536 1 links removed 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@33 -- # sn=984691114 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 984691114 00:22:26.536 1 links removed 00:22:26.536 07:49:44 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85388 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85388 ']' 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85388 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:26.536 07:49:44 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85388 00:22:26.796 killing process with pid 85388 00:22:26.796 Received shutdown signal, test time was about 1.000000 seconds 00:22:26.796 00:22:26.796 Latency(us) 00:22:26.796 [2024-11-08T07:49:44.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.796 [2024-11-08T07:49:44.757Z] =================================================================================================================== 00:22:26.796 [2024-11-08T07:49:44.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.796 07:49:44 keyring_linux -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:26.796 07:49:44 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:26.796 07:49:44 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85388' 00:22:26.796 07:49:44 keyring_linux -- common/autotest_common.sh@971 -- # kill 85388 00:22:26.796 07:49:44 keyring_linux -- common/autotest_common.sh@976 -- # wait 85388 00:22:27.055 07:49:44 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85372 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85372 ']' 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85372 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85372 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:27.055 killing process with pid 85372 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85372' 00:22:27.055 07:49:44 keyring_linux -- common/autotest_common.sh@971 -- # kill 85372 00:22:27.056 07:49:44 keyring_linux -- common/autotest_common.sh@976 -- # wait 85372 00:22:27.315 00:22:27.315 real 0m5.928s 00:22:27.315 user 0m11.268s 00:22:27.315 sys 0m1.894s 00:22:27.315 07:49:45 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:27.315 07:49:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:27.315 ************************************ 00:22:27.315 END TEST keyring_linux 00:22:27.315 ************************************ 00:22:27.315 07:49:45 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:27.315 07:49:45 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:27.315 07:49:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:27.315 07:49:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:27.315 07:49:45 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:27.315 07:49:45 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:22:27.315 07:49:45 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:27.315 07:49:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.315 07:49:45 -- common/autotest_common.sh@10 -- # set +x 00:22:27.315 07:49:45 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:27.315 07:49:45 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:22:27.315 07:49:45 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:22:27.315 07:49:45 -- common/autotest_common.sh@10 -- # set +x 00:22:29.855 INFO: APP EXITING 00:22:29.855 INFO: killing all VMs 
00:22:29.855 INFO: killing vhost app 00:22:29.855 INFO: EXIT DONE 00:22:30.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:30.795 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:30.795 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:31.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:31.736 Cleaning 00:22:31.736 Removing: /var/run/dpdk/spdk0/config 00:22:31.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:31.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:31.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:31.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:31.736 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:31.736 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:31.736 Removing: /var/run/dpdk/spdk1/config 00:22:31.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:31.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:31.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:31.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:31.736 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:31.736 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:31.736 Removing: /var/run/dpdk/spdk2/config 00:22:31.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:31.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:31.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:31.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:31.997 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:31.997 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:31.997 Removing: /var/run/dpdk/spdk3/config 00:22:31.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:31.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:31.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:31.997 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:31.997 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:31.997 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:31.997 Removing: /var/run/dpdk/spdk4/config 00:22:31.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:31.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:31.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:31.997 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:31.997 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:31.997 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:31.997 Removing: /dev/shm/nvmf_trace.0 00:22:31.997 Removing: /dev/shm/spdk_tgt_trace.pid56706 00:22:31.997 Removing: /var/run/dpdk/spdk0 00:22:31.997 Removing: /var/run/dpdk/spdk1 00:22:31.997 Removing: /var/run/dpdk/spdk2 00:22:31.997 Removing: /var/run/dpdk/spdk3 00:22:31.997 Removing: /var/run/dpdk/spdk4 00:22:31.997 Removing: /var/run/dpdk/spdk_pid56553 00:22:31.997 Removing: /var/run/dpdk/spdk_pid56706 00:22:31.997 Removing: /var/run/dpdk/spdk_pid56912 00:22:31.997 Removing: /var/run/dpdk/spdk_pid56993 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57013 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57123 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57141 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57280 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57477 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57625 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57703 00:22:31.997 
Removing: /var/run/dpdk/spdk_pid57780 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57873 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57951 00:22:31.997 Removing: /var/run/dpdk/spdk_pid57984 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58019 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58089 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58183 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58617 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58656 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58707 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58723 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58790 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58806 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58873 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58889 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58929 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58947 00:22:31.997 Removing: /var/run/dpdk/spdk_pid58993 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59003 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59134 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59169 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59246 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59586 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59598 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59634 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59648 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59663 00:22:31.997 Removing: /var/run/dpdk/spdk_pid59682 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59696 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59711 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59730 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59744 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59759 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59778 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59792 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59813 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59831 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59840 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59861 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59879 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59888 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59909 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59938 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59953 00:22:32.257 Removing: /var/run/dpdk/spdk_pid59982 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60049 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60083 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60087 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60121 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60125 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60138 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60175 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60194 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60217 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60232 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60236 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60253 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60257 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60271 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60276 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60285 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60314 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60340 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60350 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60378 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60388 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60390 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60436 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60447 00:22:32.257 Removing: 
/var/run/dpdk/spdk_pid60474 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60486 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60489 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60491 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60504 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60506 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60519 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60521 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60603 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60647 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60754 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60787 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60827 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60847 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60858 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60878 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60910 00:22:32.257 Removing: /var/run/dpdk/spdk_pid60926 00:22:32.257 Removing: /var/run/dpdk/spdk_pid61005 00:22:32.257 Removing: /var/run/dpdk/spdk_pid61022 00:22:32.257 Removing: /var/run/dpdk/spdk_pid61067 00:22:32.257 Removing: /var/run/dpdk/spdk_pid61123 00:22:32.257 Removing: /var/run/dpdk/spdk_pid61168 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61204 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61304 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61347 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61385 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61611 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61709 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61737 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61767 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61795 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61834 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61866 00:22:32.517 Removing: /var/run/dpdk/spdk_pid61899 00:22:32.517 Removing: /var/run/dpdk/spdk_pid62295 00:22:32.517 Removing: /var/run/dpdk/spdk_pid62333 00:22:32.517 Removing: /var/run/dpdk/spdk_pid62666 00:22:32.517 Removing: /var/run/dpdk/spdk_pid63119 00:22:32.517 Removing: /var/run/dpdk/spdk_pid63385 00:22:32.517 Removing: /var/run/dpdk/spdk_pid64240 00:22:32.517 Removing: /var/run/dpdk/spdk_pid65144 00:22:32.517 Removing: /var/run/dpdk/spdk_pid65261 00:22:32.517 Removing: /var/run/dpdk/spdk_pid65323 00:22:32.517 Removing: /var/run/dpdk/spdk_pid66741 00:22:32.517 Removing: /var/run/dpdk/spdk_pid67044 00:22:32.517 Removing: /var/run/dpdk/spdk_pid70699 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71059 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71169 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71310 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71335 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71369 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71390 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71484 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71614 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71766 00:22:32.517 Removing: /var/run/dpdk/spdk_pid71842 00:22:32.517 Removing: /var/run/dpdk/spdk_pid72034 00:22:32.517 Removing: /var/run/dpdk/spdk_pid72117 00:22:32.517 Removing: /var/run/dpdk/spdk_pid72204 00:22:32.517 Removing: /var/run/dpdk/spdk_pid72561 00:22:32.517 Removing: /var/run/dpdk/spdk_pid72986 00:22:32.517 Removing: /var/run/dpdk/spdk_pid72987 00:22:32.517 Removing: /var/run/dpdk/spdk_pid72988 00:22:32.517 Removing: /var/run/dpdk/spdk_pid73251 00:22:32.517 Removing: /var/run/dpdk/spdk_pid73504 00:22:32.517 Removing: /var/run/dpdk/spdk_pid73893 00:22:32.517 Removing: /var/run/dpdk/spdk_pid73899 00:22:32.517 Removing: /var/run/dpdk/spdk_pid74221 00:22:32.517 Removing: /var/run/dpdk/spdk_pid74236 
00:22:32.517 Removing: /var/run/dpdk/spdk_pid74254 00:22:32.517 Removing: /var/run/dpdk/spdk_pid74285 00:22:32.517 Removing: /var/run/dpdk/spdk_pid74290 00:22:32.517 Removing: /var/run/dpdk/spdk_pid74636 00:22:32.517 Removing: /var/run/dpdk/spdk_pid74686 00:22:32.517 Removing: /var/run/dpdk/spdk_pid75021 00:22:32.517 Removing: /var/run/dpdk/spdk_pid75213 00:22:32.517 Removing: /var/run/dpdk/spdk_pid75647 00:22:32.517 Removing: /var/run/dpdk/spdk_pid76201 00:22:32.517 Removing: /var/run/dpdk/spdk_pid77045 00:22:32.517 Removing: /var/run/dpdk/spdk_pid77687 00:22:32.517 Removing: /var/run/dpdk/spdk_pid77689 00:22:32.777 Removing: /var/run/dpdk/spdk_pid79702 00:22:32.777 Removing: /var/run/dpdk/spdk_pid79755 00:22:32.777 Removing: /var/run/dpdk/spdk_pid79802 00:22:32.777 Removing: /var/run/dpdk/spdk_pid79859 00:22:32.777 Removing: /var/run/dpdk/spdk_pid79979 00:22:32.777 Removing: /var/run/dpdk/spdk_pid80039 00:22:32.777 Removing: /var/run/dpdk/spdk_pid80086 00:22:32.777 Removing: /var/run/dpdk/spdk_pid80139 00:22:32.777 Removing: /var/run/dpdk/spdk_pid80516 00:22:32.777 Removing: /var/run/dpdk/spdk_pid81720 00:22:32.777 Removing: /var/run/dpdk/spdk_pid81853 00:22:32.777 Removing: /var/run/dpdk/spdk_pid82095 00:22:32.777 Removing: /var/run/dpdk/spdk_pid82699 00:22:32.777 Removing: /var/run/dpdk/spdk_pid82864 00:22:32.777 Removing: /var/run/dpdk/spdk_pid83021 00:22:32.777 Removing: /var/run/dpdk/spdk_pid83124 00:22:32.777 Removing: /var/run/dpdk/spdk_pid83287 00:22:32.777 Removing: /var/run/dpdk/spdk_pid83401 00:22:32.777 Removing: /var/run/dpdk/spdk_pid84126 00:22:32.777 Removing: /var/run/dpdk/spdk_pid84156 00:22:32.777 Removing: /var/run/dpdk/spdk_pid84191 00:22:32.777 Removing: /var/run/dpdk/spdk_pid84458 00:22:32.777 Removing: /var/run/dpdk/spdk_pid84488 00:22:32.777 Removing: /var/run/dpdk/spdk_pid84523 00:22:32.777 Removing: /var/run/dpdk/spdk_pid84993 00:22:32.777 Removing: /var/run/dpdk/spdk_pid85010 00:22:32.777 Removing: /var/run/dpdk/spdk_pid85249 00:22:32.777 Removing: /var/run/dpdk/spdk_pid85372 00:22:32.777 Removing: /var/run/dpdk/spdk_pid85388 00:22:32.777 Clean 00:22:32.777 07:49:50 -- common/autotest_common.sh@1451 -- # return 0 00:22:32.777 07:49:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:22:32.777 07:49:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.777 07:49:50 -- common/autotest_common.sh@10 -- # set +x 00:22:32.777 07:49:50 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:22:32.777 07:49:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.777 07:49:50 -- common/autotest_common.sh@10 -- # set +x 00:22:33.037 07:49:50 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:33.037 07:49:50 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:33.037 07:49:50 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:33.037 07:49:50 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:22:33.037 07:49:50 -- spdk/autotest.sh@394 -- # hostname 00:22:33.037 07:49:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:33.296 geninfo: WARNING: invalid characters removed from testname! 
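The coverage post-processing that follows takes the per-test capture just produced (cov_test.info), merges it with the pre-test baseline, and strips out code that is not SPDK's own (DPDK, system headers under /usr, and a few example apps). Condensed from the lcov invocations below, with the long --rc option list folded into one variable; LCOV_OPTS and $out are shorthands of mine, the paths and filters are taken from this run.

LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 ...'  # plus the genhtml/geninfo rc flags used in this run
out=/home/vagrant/spdk_repo/output   # the job's output directory
lcov $LCOV_OPTS -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
lcov $LCOV_OPTS -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info
lcov $LCOV_OPTS -q -r $out/cov_total.info --ignore-errors unused,unused '/usr/*' -o $out/cov_total.info
lcov $LCOV_OPTS -q -r $out/cov_total.info '*/examples/vmd/*' -o $out/cov_total.info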
00:22:59.868 07:50:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:00.808 07:50:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:02.713 07:50:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:04.616 07:50:22 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:07.149 07:50:24 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:09.683 07:50:27 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:11.589 07:50:29 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:11.589 07:50:29 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:11.589 07:50:29 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:11.589 07:50:29 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:11.589 07:50:29 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:11.589 07:50:29 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:11.589 + [[ -n 5262 ]] 00:23:11.589 + sudo kill 5262 00:23:11.854 [Pipeline] } 00:23:11.869 [Pipeline] // timeout 00:23:11.874 [Pipeline] } 00:23:11.888 [Pipeline] // stage 00:23:11.892 [Pipeline] } 00:23:11.905 [Pipeline] // catchError 00:23:11.914 [Pipeline] stage 00:23:11.917 [Pipeline] { (Stop VM) 00:23:11.929 [Pipeline] sh 00:23:12.209 + vagrant halt 00:23:16.401 ==> default: Halting domain... 
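What follows is the standard end of a vagrant-based run: the guest is halted, then destroyed, and the output directory produced by autotest is moved into the Jenkins workspace, presumably so the later compress/archive steps can find it. In shell terms the remaining stages amount to (commands and paths as used by this job):

vagrant halt
vagrant destroy -f
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output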
00:23:22.979 [Pipeline] sh 00:23:23.258 + vagrant destroy -f 00:23:26.546 ==> default: Removing domain... 00:23:26.558 [Pipeline] sh 00:23:26.840 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:26.848 [Pipeline] } 00:23:26.863 [Pipeline] // stage 00:23:26.868 [Pipeline] } 00:23:26.882 [Pipeline] // dir 00:23:26.887 [Pipeline] } 00:23:26.903 [Pipeline] // wrap 00:23:26.908 [Pipeline] } 00:23:26.921 [Pipeline] // catchError 00:23:26.930 [Pipeline] stage 00:23:26.932 [Pipeline] { (Epilogue) 00:23:26.945 [Pipeline] sh 00:23:27.227 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:32.523 [Pipeline] catchError 00:23:32.525 [Pipeline] { 00:23:32.538 [Pipeline] sh 00:23:32.863 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:33.122 Artifacts sizes are good 00:23:33.132 [Pipeline] } 00:23:33.147 [Pipeline] // catchError 00:23:33.157 [Pipeline] archiveArtifacts 00:23:33.164 Archiving artifacts 00:23:33.304 [Pipeline] cleanWs 00:23:33.316 [WS-CLEANUP] Deleting project workspace... 00:23:33.316 [WS-CLEANUP] Deferred wipeout is used... 00:23:33.323 [WS-CLEANUP] done 00:23:33.325 [Pipeline] } 00:23:33.340 [Pipeline] // stage 00:23:33.346 [Pipeline] } 00:23:33.361 [Pipeline] // node 00:23:33.366 [Pipeline] End of Pipeline 00:23:33.403 Finished: SUCCESS